Merge branch 'master' of https://github.com/wader/fq into avro
commit 9636613ec6
.github/workflows/ci.yml (2 changes, vendored)
@@ -52,7 +52,7 @@ jobs:
     - name: Setup go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.6
+        go-version: 1.17.7
     - name: Test
       env:
         GOARCH: ${{ matrix.goarch }}
.github/workflows/release.yml (2 changes, vendored)
@@ -19,7 +19,7 @@ jobs:
     - name: Setup go
       uses: actions/setup-go@v2
      with:
-        go-version: 1.17.6
+        go-version: 1.17.7
     - name: Run goreleaser
       uses: goreleaser/goreleaser-action@v2
       with:
@@ -1,5 +1,5 @@
 # bump: docker-golang /FROM golang:([\d.]+)/ docker:golang|^1
-FROM golang:1.17.6-bullseye AS base
+FROM golang:1.17.7-bullseye AS base

 # expect is used to test cli
 RUN \
LICENSE (2 changes)
@@ -1,3 +1,5 @@
 MIT License

 Copyright (c) 2021 Mattias Wadman
+colorjson fork and various code in gojqextra package Copyright (c) 2019-2021 itchyny
README.md (23 changes)
@@ -147,6 +147,14 @@ xattr -d com.apple.quarantine fq && spctl --add fq
 brew install wader/tap/fq
 ```

+### Windows
+
+`fq` can be installed via [scoop](https://scoop.sh/).
+
+```powershell
+scoop install fq
+```
+
 ### Arch Linux

 `fq` can be installed from the [community repository](https://archlinux.org/packages/community/x86_64/fq/) using [pacman](https://wiki.archlinux.org/title/Pacman):
@@ -228,3 +236,18 @@ for inventing the [jq](https://github.com/stedolan/jq) language.
 - [GNU poke](https://www.jemarch.net/poke)
 - [ffmpeg/ffprobe](https://ffmpeg.org)
 - [hexdump](https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/tree/text-utils/hexdump.c)
+
+## License
+
+`fq` is distributed under the terms of the MIT License.
+
+See the [LICENSE](LICENSE) file for license details.
+
+Licenses of direct dependencies:
+
+- Forked version of gojq https://github.com/itchyny/gojq/blob/main/LICENSE (MIT)
+- Forked version of readline https://github.com/chzyer/readline/blob/master/LICENSE (MIT)
+- gopacket https://github.com/google/gopacket/blob/master/LICENSE (BSD)
+- mapstructure https://github.com/mitchellh/mapstructure/blob/master/LICENSE (MIT)
+- go-difflib https://github.com/pmezard/go-difflib/blob/master/LICENSE (BSD)
+- golang/x/text https://github.com/golang/text/blob/master/LICENSE (BSD)
@@ -41,7 +41,8 @@ def urldecode:
 # ex: .frames | changes(.header.sample_rate)
 def changes(f): streaks_by(f)[].[0];

-def radix62sp: radix(62; "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"; {
+def toradix62sp: toradix(62; "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ");
+def fromradix62sp: fromradix(62; {
   "0": 0, "1": 1, "2": 2, "3": 3,"4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
   "a": 10, "b": 11, "c": 12, "d": 13, "e": 14, "f": 15, "g": 16,
   "h": 17, "i": 18, "j": 19, "k": 20, "l": 21, "m": 22, "n": 23,
@@ -1,8 +1,7 @@
 ### Known bugs to fix

-- `fq -n '"aabbccdd" | hex | tobytes[1:] | raw | tobytes'` create buffer `aabbcc` should be `bbccdd`. I think decode (raw in this case) is confused by root value buffer.
-- Buffers/string duality is confusing, most string functions should be wrapped to understand buffers.
-- `fq -n '[([0xab] | tobits[:4]), ([0xdc] | tobits[:4]), 0] | tobytes'` should create a `ad` buffer, now `a0`. Probably because use of `io.Copy` that will ends up padding on byte boundaries. Should use `bitio.Copy` and create a `bitio.Writer` that can transform to a `io.Writer`.
+- `fq -n '"aabbccdd" | hex | tobytes[1:] | raw | tobytes'` create binary `aabbcc` should be `bbccdd`. I think decode (raw in this case) is confused by root value buffer.
+- Buffers/string duality is confusing, most string functions should be wrapped to understand binary.
 - REPL cancel seems to sometimes exit a sub-REPL without properly cleaning up options.
 - Value errors, can only be accessed with `._error`.
 - Framed (add unknown in gaps) decode should be on struct level not format?
@@ -14,6 +13,7 @@
 - Rework cli/repl user interrupt (context cancel via ctrl-c), see comment in Interp.Main
 - Optimize `Interp.Options` calls, now called per display. Cache per eval? needs to handle nested evals.
 - `<array decode value>[{start: ...: end: ...}]` syntax a bit broken.
+- REPL completion might have side effects. Make interp.Function type know and wrap somehow? input, inputs, open, ...

 ### TODO and ideas
doc/dev.md (127 changes)
@@ -32,6 +32,133 @@ Flags can be struct with bit-fields.
- Can new formats be added to other formats
- Does the new format include existing formats

### Decoder API

Readers use this convention `d.<Field>?<reader<length>?|<type>Fn>(... [,scalar.Mapper...])`:
- If it starts with `Field` a field will be added and the first argument is the name of the field. If not it will just read.
- `<reader<length>?|<type>Fn>` is a reader or a reader function
  - `<reader<length>?>` is a reader such as `U16` (unsigned 16 bit) or `UTF8` (UTF-8 with length as argument). Reads bits using some decoder.
  - `<type>Fn` reads using a `func(d *decode.D) <type>` function.
    - This can be used to implement your own custom readers.

All `Field` functions take a variadic list of `scalar.Mapper`s that will be applied after reading.

`<type>` are these Go types and their names in the API:
- `uint64` known as `U` (unsigned number)
- `int64` known as `S` (signed number)
- `float64` known as `F`
- `string` known as `Str`
- `bool` known as `Bool`
- `*big.Int` known as `BigInt`
- `nil` null value known as `Nil`

TODO: there are some more (BitBuf etc, should be renamed)

To add a struct or array use `d.FieldStruct(...)` and `d.FieldArray(...)`.

For example this decoder:

```go
d.FieldUTF8("magic", 4) // read 4 byte UTF-8 string and add it as "magic"
d.FieldStruct("headers", func(d *decode.D) { // create a new struct and add it as "headers"
    d.FieldU8("type", scalar.UToSymStr{ // read 8 bit unsigned integer, map it and add it as "type"
        1: "start",
        // ...
    })
})
```

will produce something like this:

```go
*decode.Value{
    Parent: nil,
    V: *decode.Compound{
        IsArray: false, // is struct
        Children: []*decode.Value{
            *decode.Value{
                Name: "magic",
                V: scalar.S{
                    Actual: "abcd", // read and set by UTF8 reader
                },
            },
            *decode.Value{
                Parent: &..., // ref parent *decode.Value
                Name: "headers",
                V: *decode.Compound{
                    IsArray: false, // is struct
                    Children: []*decode.Value{
                        *decode.Value{
                            Name: "type",
                            V: scalar.S{
                                Actual: uint64(1), // read and set by U8 reader
                                Sym: "start",      // set by UToSymStr scalar.Mapper
                            },
                        },
                    },
                },
            },
        },
    },
}
```

and will look like this in jq/JSON:

```json
{
  "magic": "abcd",
  "headers": {
    "type": "start"
  }
}
```

#### *decode.D

This is the main type used during decoding. It keeps track of:

- The current array or struct `*decode.Value` that fields will be added to.
- The current bit reader
- Decode options
- Default endian

New `*decode.D` values are created during decoding when `d.FieldStruct` etc is used. It is also a kitchen sink of all kinds of functions for reading various standard number and string encodings etc.

Decoder authors do not have to create them.

#### decode.Value

Is what `*decode.D` produces and is used to represent the decoded structure; it can be an array, struct, number, string etc. It is the underlying type used by `interp.DecodeValue`, which implements `gojq.JQValue` to expose it as various jq types.

It stores:
- Parent `*decode.Value` unless it's a root.
- A decoded value, a `scalar.S` or `*decode.Compound` (struct or array)
- Name in parent struct or array. If parent is a struct the name is unique.
- Index in parent array. Not used if parent is a struct.
- A bit range. Struct and array also have a range that is the min/max range of their children.
- A bit reader where the bit range can be read from.

Decoder authors will probably not have to create them.

#### scalar.S

Keeps track of:
- Actual value. The decoded value represented using a Go type like `uint64`, `string` etc. For example a value read by a UTF-8 or UTF-16 reader will in both cases end up as a `string`.
- Symbolic value. Optional symbolic representation of the actual value. For example a `scalar.UToSymStr` would map an actual `uint64` to a symbolic `string`.
- String description of the value.
- Number representation

The `scalar` package has `scalar.Mapper` implementations for all types to map an actual value to a whole `scalar.S` value (`scalar.<type>ToScalar`) or to just set the symbolic value (`scalar.<type>ToSym<type>`). There are also mappers to just set values or to change number representations, `scalar.Hex`/`scalar.SymHex` etc.

Decoder authors will probably not have to create them. But you might implement your own `scalar.Mapper` to modify them (see the sketch after this hunk).

#### *decode.Compound

Used to store a struct or array of `*decode.Value`.

Decoder authors do not have to create them.

### Development tips

I usually use `-d <format>` and `dv` while developing, that way you will get a decode tree
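Editor's note: the `scalar.Mapper` mentioned above is the extension point decoder authors are most likely to implement themselves. Below is a minimal sketch (not part of this commit) of what such a mapper could look like; it assumes `Mapper` is the single-method interface `MapScalar(scalar.S) (scalar.S, error)` and that `scalar.S` exposes the `Actual` and `Description` fields described in the hunk — check `pkg/scalar` for the exact definitions.

```go
package mydecoder

import "github.com/wader/fq/pkg/scalar"

// uDescribeOver is a hypothetical scalar.Mapper: it leaves the actual value
// untouched and only adds a description when a uint64 exceeds a limit.
type uDescribeOver struct{ limit uint64 }

func (m uDescribeOver) MapScalar(s scalar.S) (scalar.S, error) {
	if u, ok := s.Actual.(uint64); ok && u > m.limit {
		s.Description = "unusually large"
	}
	return s, nil
}
```

It would be passed after a reader like any built-in mapper, for example `d.FieldU32("size", uDescribeOver{limit: 1 << 20})` (hypothetical field name).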
doc/usage.md (135 changes)
@@ -73,7 +73,7 @@ Default if not explicitly used `display` will only show the root level:

 ![fq demo](display_decode_value.svg)

-First row shows ruler with byte offset into the line and JSON path for the value.
+First row shows ruler with byte offset into the line and jq path for the value.

 The columns are:
 - Start address for the line. For example we see that `type` starts at `0xd60`+`0x09`.
@@ -109,7 +109,7 @@ There are also some other `display` aliases:
 The interactive [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop)
 has auto completion and nested REPL support:

-```sh
+```
 # start REPL with null input
 $ fq -i
 null>
@@ -196,7 +196,7 @@ fq -n 'def f: .. | select(format=="avc_sps"); diff(input|f; input|f)' a.mp4 b.mp

 #### Extract first JPEG found in file

-Recursively look for first value that is a `jpeg` decode value root. Use `tobytes` to get bytes buffer for value. Redirect bytes to a file.
+Recursively look for first value that is a `jpeg` decode value root. Use `tobytes` to get bytes for value. Redirect bytes to a file.

 ```sh
 fq 'first(.. | select(format=="jpeg")) | tobytes' file > file.jpeg
@@ -269,6 +269,85 @@ single argument `1, 2` (a lambda expression that output `1` and then output `2`)
 achieved.
 - Expressions have one implicit input and output value. This is how pipelines like `1 | . * 2` work.


 ## Types specific to fq

 fq has two additional types compared to jq, decode value and binary. In standard jq expressions they will in most cases behave as some standard jq type.

 ### Decode value

 This type is returned by decoders and is used to represent parts of the decoded input. It can act as all standard jq types, object, array, number, string etc.

 Each decode value has these properties:
 - A bit range in the input
   - Can be accessed as a binary using `tobits`/`tobytes`. Use the `start` and `size` keys to get position and size.
   - `.name` as bytes `.name | tobytes`
   - Bit 4-8 of `.name` as bits `.name | tobits[4:8]`

 Each non-compound decode value has these properties:
 - An actual value:
   - This is the decoded representation of the bits, a number, string, bool etc.
   - Can be accessed using `toactual`.
 - An optional symbolic value:
   - Is usually a mapping of the actual to symbolic value, ex: map number to a string value.
   - Can be accessed using `tosym`.
 - An optional description:
   - Can be accessed using `todescription`
 - `parent` is the parent decode value
 - `parents` is all the parent decode values
 - `topath` is the jq path for the decode value
 - `torepr` convert decode value to its representation if possible

 The value of a decode value is the symbolic value if available and otherwise the actual value. To explicitly access the value use `tovalue`. In most expressions this is not needed as it will be done automatically.

 ### Binary

 Binaries are raw bits with a unit size, 1 (bits) or 8 (bytes), that can have a non-byte aligned size. Will act as byte padded strings in standard jq expressions.

 Use `tobits` and `tobytes` to create them from decode values, strings, numbers or binary arrays. `tobytes` will if needed zero pad most significant bits to be byte aligned.

 There are also `tobitsrange` and `tobytesrange` which do the same thing but will preserve the source range when displayed.

 - `"string" | tobytes` produces a binary with UTF8 codepoint bytes.
 - `1234 | tobits` produces a binary with the unsigned big-endian integer 1234 with enough bits to represent the number. Use `tobytes` to get the same but with enough bytes to represent the number. This is different to how numbers work inside binary arrays where they are limited to 0-255.
 - `["abc", 123, ...] | tobytes` produces a binary from a binary array. See [binary array](#binary-array) below.
 - `.[index]` access bit or byte at index `index`. Index is in units.
   - `[0x12, 0x34, 0x56] | tobytes[1]` is `0x34`
   - `[0x12, 0x34, 0x56] | tobits[3]` is `1`
 - `.[start:]`, `.[start:end]` or `.[:end]` is normal jq slice syntax and will slice the binary from `start` to `end`. `start` and `end` are in units.
   - `[0x12, 0x34, 0x56] | tobytes[1:2]` will be a binary with the byte `0x34`
   - `[0x12, 0x34, 0x56] | tobits[4:12]` will be a binary with the byte `0x23`
   - `[0x12, 0x34, 0x56] | tobits[4:20]` will be a binary with the bytes `0x23`, `0x45`
   - `[0x12, 0x34, 0x56] | tobits[4:20] | tobytes[1:]` will be a binary with the byte `0x45`

 Both `.[index]` and `.[start:end]` support negative indices to index from end.

 TODO: tobytesrange, padding

 #### Binary array

 Is an array of numbers, strings, binaries or other nested binary arrays. When used as input to `tobits`/`tobytes` the following rules are used:
 - Number is a byte with value 0-255
 - String is its UTF8 codepoint bytes
 - Binary is used as is
 - Binary array is used recursively

 Binary arrays are similar to and inspired by [Erlang iolist](https://www.erlang.org/doc/man/erlang.html#type-iolist).

 Some examples:

 `[0, 123, 255] | tobytes` will be a binary with the 3 bytes 0, 123 and 255

 `[0, [123, 255]] | tobytes` same as above

 `[0, 1, 1, 0, 0, 1, 1, 0] | tobits` will be a binary with 1 byte, 0x66, an "f"

 `[(.a | tobytes[-10:]), 255, (.b | tobits[:10])] | tobytes` the concatenation of the last 10 bytes of `.a`, a byte with value 255 and the first 10 bits of `.b`.

 The difference between `tobits` and `tobytes` is the unit size, 1 bit vs 8 bits (a byte); `tobytes` will zero pad to byte alignment if needed.

 TODO: padding and alignment

 ## Functions

 - All standard library functions from jq
@@ -297,32 +376,32 @@ unary uses input and if more than one argument all as arguments ignoring the inp
 - `todescription` description of value
 - `torepr` convert decode value into what it represents. For example convert msgpack decode value
   into a value representing its JSON representation.
-- All regexp functions work with buffers as input and pattern argument with these differences
-  from the string versions:
+- All regexp functions work with binary as input and pattern argument with these differences
+  compared to when using string input:
   - All offset and length will be in bytes.
-  - For `capture` the `.string` value is a buffer.
-  - If pattern is a buffer it will be matched literally and not as a regexp.
-  - If pattern is a buffer or flags include "b" each input byte will be read as separate code points
-  - `scan_toend($v)`, `scan_toend($v; $flags)` works the same as `scan` but output buffer are from start of match to
-    end of buffer.
+  - For `capture` the `.string` value is a binary.
+  - If pattern is a binary it will be matched literally and not as a regexp.
+  - If pattern is a binary or flags include "b" each input byte will be read as separate code points
+  - `scan_toend($v)`, `scan_toend($v; $flags)` works the same as `scan` but output binary are from start of match to
+    end of binary.
   instead of possibly multi-byte UTF-8 codepoints. This allows to match raw bytes. Ex: `match("\u00ff"; "b")`
   will match the byte `0xff` and not the UTF-8 encoded codepoint for 255, `match("[^\u00ff]"; "b")` will match
   all non-`0xff` bytes.
 - `grep` functions take 1 or 2 arguments. First is a scalar to match, where a string is
-  treated as a regexp. A buffer scalar will be matches exact bytes. Second argument are regexp
-  flags with addition that "b" will treat each byte in the input buffer as a code point, this
+  treated as a regexp. A binary will be matches exact bytes. Second argument are regexp
+  flags with addition that "b" will treat each byte in the input binary as a code point, this
   makes it possible to match exact bytes.
-  - `grep($v)`, `grep($v; $flags)` recursively match value and buffer
+  - `grep($v)`, `grep($v; $flags)` recursively match value and binary
   - `vgrep($v)`, `vgrep($v; $flags)` recursively match value
-  - `bgrep($v)`, `bgrep($v; $flags)` recursively match buffer
+  - `bgrep($v)`, `bgrep($v; $flags)` recursively match binary
   - `fgrep($v)`, `fgrep($v; $flags)` recursively match field name
   - `grep_by(f)` recursively match using a filter. Ex: `grep_by(. > 180 and . < 200)`, `first(grep_by(format == "id3v2"))`.
-- Buffers:
-  - `tobits` - Transform input into a bits buffer not preserving source range, will start at zero.
-  - `tobitsrange` - Transform input into a bits buffer preserving source range if possible.
-  - `tobytes` - Transform input into a bytes buffer not preserving source range, will start at zero.
-  - `tobytesrange` - Transform input into a byte buffer preserving source range if possible.
-  - `buffer[start:end]`, `buffer[:end]`, `buffer[start:]` - Create a sub buffer from start to end in buffer units preserving source range.
+- Binary:
+  - `tobits` - Transform input to binary with bit as unit, does not preserving source range, will start at zero.
+  - `tobitsrange` - Transform input to binary with bit as unit, preserves source range if possible.
+  - `tobytes` - Transform input to binary with byte as unit, does not preserving source range, will start at zero.
+  - `tobytesrange` - Transform input binary with byte as unit, preserves source range if possible.
+  - `.[start:end]`, `.[:end]`, `.[start:]` - Slice binary from start to end preserving source range.
 - `open` open file for reading
 - All decode functions take an optional options argument. The only option currently is `force` to ignore decoder asserts.
   For example to decode as mp3 and ignore asserts do `mp3({force: true})` or `decode("mp3"; {force: true})`, from command line
@@ -330,15 +409,17 @@ you currently have to do `fq -d raw 'mp3({force: true})' file`.
 - `decode`, `decode($format)`, `decode($format; $opts)` decode format
 - `probe`, `probe($opts)` probe and decode format
 - `mp3`, `mp3($opts)`, ..., `<name>`, `<name>($opts)` same as `decode(<name>)($opts)`, `decode($format; $opts)` decode as format
-- Display shows hexdump/ASCII/tree for decode values and JSON for other values.
-  - `d`/`d($opts)` display value and truncate long arrays and buffers
+- Display shows hexdump/ASCII/tree for decode values and jq value for other types.
+  - `d`/`d($opts)` display value and truncate long arrays and binaries
   - `da`/`da($opts)` display value and don't truncate arrays
-  - `dd`/`dd($opts)` display value and don't truncate arrays or buffers
-  - `dv`/`dv($opts)` verbosely display value and don't truncate arrays but truncate buffers
-  - `ddv`/`ddv($opts)` verbosely display value and don't truncate arrays or buffers
+  - `dd`/`dd($opts)` display value and don't truncate arrays or binaries
+  - `dv`/`dv($opts)` verbosely display value and don't truncate arrays but truncate binaries
+  - `ddv`/`ddv($opts)` verbosely display value and don't truncate arrays or binaries
   - `p`/`preview` show preview of field tree
   - `hd`/`hexdump` hexdump value
 - `repl` nested REPL, must be last in a pipeline. `1 | repl`, can "slurp" outputs `1, 2, 3 | repl`.
+- `paste` read string from stdin until ^D. Useful for pasting text.
+  - Ex: `paste | frompem | asn1_ber | repl` read from stdin then decode and start a new sub-REPL with result.

 ## Color and unicode output

@@ -397,10 +478,6 @@ A value has these special keys (TODO: remove, are internal)

 - TODO: unknown gaps

-## Binary and IO lists
-
-- TODO: similar to erlang io lists, [], binary, string (utf8) and numbers
-
 ## Own decoders and use as library

 TODO
@@ -15,6 +15,7 @@ import (

 	_ "github.com/wader/fq/format/all"
 	"github.com/wader/fq/format/registry"
+	"github.com/wader/fq/pkg/decode"
 	"github.com/wader/fq/pkg/interp"
 )

@@ -26,6 +27,7 @@ func (fuzzFS) Open(name string) (fs.File, error) {

 type fuzzTest struct {
 	b []byte
+	f decode.Format
 }

 type fuzzTestInput struct {
@@ -47,22 +49,22 @@ func (ft *fuzzTest) Platform() interp.Platform { return interp.Platform{} }
 func (ft *fuzzTest) Stdin() interp.Input {
 	return fuzzTestInput{FileReader: interp.FileReader{R: bytes.NewBuffer(ft.b)}}
 }
-func (ft *fuzzTest) Stdout() interp.Output { return fuzzTestOutput{os.Stdout} }
-func (ft *fuzzTest) Stderr() interp.Output { return fuzzTestOutput{os.Stderr} }
+func (ft *fuzzTest) Stdout() interp.Output { return fuzzTestOutput{ioutil.Discard} }
+func (ft *fuzzTest) Stderr() interp.Output { return fuzzTestOutput{ioutil.Discard} }
 func (ft *fuzzTest) InterruptChan() chan struct{} { return nil }
 func (ft *fuzzTest) Environ() []string { return nil }
 func (ft *fuzzTest) Args() []string {
 	return []string{
 		`fq`,
-		`-d`, `raw`,
-		`(_registry.groups | keys[] | select(. != "all")) as $f | decode($f)?`,
+		`-d`, ft.f.Name,
+		`.`,
 	}
 }
 func (ft *fuzzTest) ConfigDir() (string, error) { return "/config", nil }
 func (ft *fuzzTest) FS() fs.FS { return fuzzFS{} }
 func (ft *fuzzTest) History() ([]string, error) { return nil, nil }

-func (ft *fuzzTest) Readline(prompt string, complete func(line string, pos int) (newLine []string, shared int)) (string, error) {
+func (ft *fuzzTest) Readline(opts interp.ReadlineOpts) (string, error) {
 	return "", io.EOF
 }

@@ -92,8 +94,11 @@ func FuzzFormats(f *testing.F) {
 		return nil
 	})

+	gi := 0
+	g := registry.Default.MustAll()

 	f.Fuzz(func(t *testing.T, b []byte) {
-		fz := &fuzzTest{b: b}
+		fz := &fuzzTest{b: b, f: g[gi]}
 		q, err := interp.New(fz, registry.Default)
 		if err != nil {
 			t.Fatal(err)
@@ -104,5 +109,7 @@ func FuzzFormats(f *testing.F) {
 		// // TODO: expect error
 		// t.Fatal(err)
 		// }

+		gi = (gi + 1) % len(g)
 	})
 }
@@ -168,6 +168,8 @@ func spuDecode(d *decode.D, in interface{}) interface{} {
 			size := d.FieldU16("size")
+			// TODO
+			d.FieldRawLen("data", int64(size)*8)
 		default:
 			d.Fatalf("unknown command %d", cmd)
 		}
 	})
 }
go.mod (2 changes)
@@ -23,7 +23,7 @@ require (

 require (
 	// fork of github.com/itchyny/gojq, see github.com/wader/gojq fq branch
-	github.com/wader/gojq v0.12.1-0.20220108235115-6a05b6c59ace
+	github.com/wader/gojq v0.12.1-0.20220212115358-b98ce15ac16e
 	// fork of github.com/chzyer/readline, see github.com/wader/readline fq branch
 	github.com/wader/readline v0.0.0-20220117233529-692d84ca36e2
 )
go.sum (4 changes)
@@ -11,8 +11,8 @@ github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGg
 github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/wader/gojq v0.12.1-0.20220108235115-6a05b6c59ace h1:pt07NaC7OhePrQVRKRxZy9umeWkjr28AmbtQC9CrtVQ=
-github.com/wader/gojq v0.12.1-0.20220108235115-6a05b6c59ace/go.mod h1:tdC5h6dXdwAJs7eJUw4681AzsgfOSBrAV+cZzEbCZs4=
+github.com/wader/gojq v0.12.1-0.20220212115358-b98ce15ac16e h1:gDujpnVkZWYgHVrNbN3FikM2Khp+n/J9NsLEQhl3IFc=
+github.com/wader/gojq v0.12.1-0.20220212115358-b98ce15ac16e/go.mod h1:tdC5h6dXdwAJs7eJUw4681AzsgfOSBrAV+cZzEbCZs4=
 github.com/wader/readline v0.0.0-20220117233529-692d84ca36e2 h1:AK4wt6mSypGEVAzUcCfrJqVD5hju+w81b9J/k0swV/8=
 github.com/wader/readline v0.0.0-20220117233529-692d84ca36e2/go.mod h1:TJUJCkylZhI0Z07t2Nw6l6Ck7NiZqUpnMlkjEzN7+yM=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
internal/bitioextra/zeroreadatseeker.go (new file, 58 lines)
@@ -0,0 +1,58 @@
package bitioextra

import (
	"io"

	"github.com/wader/fq/pkg/bitio"
)

type ZeroReadAtSeeker struct {
	pos   int64
	nBits int64
}

func NewZeroAtSeeker(nBits int64) *ZeroReadAtSeeker {
	return &ZeroReadAtSeeker{nBits: nBits}
}

func (z *ZeroReadAtSeeker) SeekBits(bitOffset int64, whence int) (int64, error) {
	p := z.pos
	switch whence {
	case io.SeekStart:
		p = bitOffset
	case io.SeekCurrent:
		p += bitOffset
	case io.SeekEnd:
		p = z.nBits + bitOffset
	default:
		panic("unknown whence")
	}

	if p < 0 || p > z.nBits {
		return z.pos, bitio.ErrOffset
	}
	z.pos = p

	return p, nil
}

func (z *ZeroReadAtSeeker) ReadBitsAt(p []byte, nBits int64, bitOff int64) (n int64, err error) {
	if bitOff < 0 || bitOff > z.nBits {
		return 0, bitio.ErrOffset
	}
	if bitOff == z.nBits {
		return 0, io.EOF
	}

	lBits := z.nBits - bitOff
	rBits := nBits
	if rBits > lBits {
		rBits = lBits
	}
	rBytes := bitio.BitsByteCount(rBits)
	for i := int64(0); i < rBytes; i++ {
		p[i] = 0
	}

	return rBits, nil
}
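Editor's note: a rough usage sketch for the new zero reader (not from the commit). It hands out nBits of zero bits without backing storage, which is useful for padding; it relies only on the ReadAtFull and BitsByteCount signatures visible elsewhere in this diff, and assumes the snippet lives inside the fq module so the internal package is importable.

```go
package main

import (
	"fmt"

	"github.com/wader/fq/internal/bitioextra"
	"github.com/wader/fq/pkg/bitio"
)

func main() {
	z := bitioextra.NewZeroAtSeeker(12) // 12 bits of zeros, no backing buffer
	p := make([]byte, bitio.BitsByteCount(12))
	if _, err := bitio.ReadAtFull(z, p, 12, 0); err != nil {
		panic(err)
	}
	fmt.Printf("%08b\n", p) // [00000000 00000000]
}
```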
@@ -149,8 +149,8 @@ func (cr *CaseRun) ConfigDir() (string, error) { return "/config", nil }

 func (cr *CaseRun) FS() fs.FS { return cr.Case }

-func (cr *CaseRun) Readline(prompt string, complete func(line string, pos int) (newLine []string, shared int)) (string, error) {
-	cr.ActualStdoutBuf.WriteString(prompt)
+func (cr *CaseRun) Readline(opts interp.ReadlineOpts) (string, error) {
+	cr.ActualStdoutBuf.WriteString(opts.Prompt)
 	if cr.ReadlinesPos >= len(cr.Readlines) {
 		return "", io.EOF
 	}
@@ -165,7 +165,7 @@ func (cr *CaseRun) Readline(prompt string, complete func(line string, pos int) (
 	cr.ActualStdoutBuf.WriteString(lineRaw + "\n")

 	l := len(line) - 1
-	newLine, shared := complete(line[0:l], l)
+	newLine, shared := opts.CompleteFn(line[0:l], l)
 	// TODO: shared
 	_ = shared
 	for _, nl := range newLine {
@@ -1,18 +0,0 @@
-The bitio package tries to mimic the standard library packages io and bytes as much as possible.
-
-- bitio.Buffer same as bytes.Buffer
-- bitio.IOBitReadSeeker is a bitio.ReaderAtSeeker that from a io.ReadSeeker
-- bitio.IOBitWriter a bitio.BitWriter that write bytes to a io.Writer, use Flush() to write possible unaligned byte
-- bitio.IOReader is a io.Reader that reads bytes from a bit reader, will zero pad on unaligned byte eof
-- bitio.IOReadSeeker is a io.ReadSeeker that read/seek bytes in a bit stream, will zero pad on unaligned
-- bitio.NewBitReader same as bytes.NewReader
-- bitio.LimitReader same as io.LimitReader
-- bitio.MultiReader same as io.MultiReader
-- bitio.SectionReader same as io.SectionReader
-- bitio.Copy* same as io.Copy*
-- bitio.ReadFull same as io.ReadFull
-
-TODO:
-- bitio.IOBitReader bitio.Reader that reads from a io.Reader
-- bitio.IOBitWriteSeeker bitio.BitWriteSeeker that writes to a io.WriteSeeker
-- bitio.CopyN
-- speed up read by using a cache somehow ([]byte or just a uint64?)
@@ -8,40 +8,43 @@ import (
 	"strings"
 )

+// ErrOffset means seek positions is invalid
 var ErrOffset = errors.New("invalid seek offset")

+// ErrNegativeNBits means read tried to read negative number of bits
 var ErrNegativeNBits = errors.New("negative number of bits")

-// Reader is something that reads bits
-// Similar to io.Reader
+// Reader is something that reads bits.
+// Similar to io.Reader.
 type Reader interface {
 	ReadBits(p []byte, nBits int64) (n int64, err error)
 }

-// Writer is something that writs bits
-// Similar to io.Writer
+// Writer is something that writs bits.
+// Similar to io.Writer.
 type Writer interface {
 	WriteBits(p []byte, nBits int64) (n int64, err error)
 }

 // Seeker is something that seeks bits
-// Similar to io.Seeker
+// Similar to io.Seeker.
 type Seeker interface {
 	SeekBits(bitOffset int64, whence int) (int64, error)
 }

-// ReaderAt is something that reads bits at an offset
-// Similar to io.ReaderAt
+// ReaderAt is something that reads bits at an offset.
+// Similar to io.ReaderAt.
 type ReaderAt interface {
 	ReadBitsAt(p []byte, nBits int64, bitOff int64) (n int64, err error)
 }

-// ReadSeeker is bitio.Reader and bitio.Seeker
+// ReadSeeker is bitio.Reader and bitio.Seeker.
 type ReadSeeker interface {
 	Reader
 	Seeker
 }

-// ReadAtSeeker is bitio.ReaderAt and bitio.Seeker
+// ReadAtSeeker is bitio.ReaderAt and bitio.Seeker.
 type ReadAtSeeker interface {
 	ReaderAt
 	Seeker
@@ -54,9 +57,9 @@ type ReaderAtSeeker interface {
 	Seeker
 }

-// NewBitReader reader reading nBits bits from a []byte
+// NewBitReader reader reading nBits bits from a []byte.
 // If nBits is -1 all bits will be used.
-// Similar to bytes.NewReader
+// Similar to bytes.NewReader.
 func NewBitReader(buf []byte, nBits int64) *SectionReader {
 	if nBits < 0 {
 		nBits = int64(len(buf)) * 8
@@ -68,7 +71,7 @@ func NewBitReader(buf []byte, nBits int64) *SectionReader {
 	)
 }

-// BitsByteCount returns smallest amount of bytes to fit nBits bits
+// BitsByteCount returns smallest amount of bytes to fit nBits bits.
 func BitsByteCount(nBits int64) int64 {
 	n := nBits / 8
 	if nBits%8 != 0 {
@@ -77,7 +80,7 @@ func BitsByteCount(nBits int64) int64 {
 	return n
 }

-// BytesFromBitString []byte from bit string, ex: "0101" -> ([]byte{0x50}, 4)
+// BytesFromBitString from []byte to bit string, ex: "0101" -> ([]byte{0x50}, 4)
 func BytesFromBitString(s string) ([]byte, int64) {
 	r := len(s) % 8
 	bufLen := len(s) / 8
@@ -97,7 +100,7 @@ func BytesFromBitString(s string) ([]byte, int64) {
 	return buf, int64(len(s))
 }

-// BitStringFromBytes string from []byte], ex: ([]byte{0x50}, 4) -> "0101"
+// BitStringFromBytes from string to []byte, ex: ([]byte{0x50}, 4) -> "0101"
 func BitStringFromBytes(buf []byte, nBits int64) string {
 	sb := &strings.Builder{}
 	for i := int64(0); i < nBits; i++ {
@@ -110,8 +113,8 @@ func BitStringFromBytes(buf []byte, nBits int64) string {
 	return sb.String()
 }

-// CopyBuffer bits from src to dst using provided buffer
-// Similar to io.CopyBuffer
+// CopyBuffer bits from src to dst using provided byte buffer.
+// Similar to io.CopyBuffer.
 func CopyBuffer(dst Writer, src Reader, buf []byte) (n int64, err error) {
 	// same default size as io.Copy
 	if buf == nil {
@@ -144,8 +147,8 @@ func CopyBuffer(dst Writer, src Reader, buf []byte) (n int64, err error) {
 	return written, err
 }

-// Copy bits from src to dst
-// Similar to io.Copy
+// Copy bits from src to dst.
+// Similar to io.Copy.
 func Copy(dst Writer, src Reader) (n int64, err error) {
 	return CopyBuffer(dst, src, nil)
 }
@@ -214,12 +217,16 @@ func readFull(p []byte, nBits int64, bitOff int64, fn func(p []byte, nBits int64
 	return nBits, nil
 }

+// ReadAtFull expects to read nBits from r at bitOff.
+// Similar to io.ReadFull.
 func ReadAtFull(r ReaderAt, p []byte, nBits int64, bitOff int64) (int64, error) {
 	return readFull(p, nBits, bitOff, func(p []byte, nBits int64, bitOff int64) (int64, error) {
 		return r.ReadBitsAt(p, nBits, bitOff)
 	})
 }

+// ReadFull expects to read nBits from r.
+// Similar to io.ReadFull.
 func ReadFull(r Reader, p []byte, nBits int64) (int64, error) {
 	return readFull(p, nBits, 0, func(p []byte, nBits int64, bitOff int64) (int64, error) {
 		return r.ReadBits(p, nBits)
@@ -6,8 +6,8 @@ import (
 	"io"
 )

-// Buffer is a bitio.Reader and bitio.Writer providing a bit buffer
-// Similar to bytes.Buffer
+// Buffer is a bitio.Reader and bitio.Writer providing a bit buffer.
+// Similar to bytes.Buffer.
 type Buffer struct {
 	buf     []byte
 	bufBits int64
pkg/bitio/doc.go (new file, 34 lines)
@@ -0,0 +1,34 @@
// Package bitio tries to mimic the standard library packages io and bytes but for bits.
//
// - bitio.Buffer same as bytes.Buffer
//
// - bitio.IOBitReadSeeker is a bitio.ReaderAtSeeker that reads from a io.ReadSeeker
//
// - bitio.IOBitWriter a bitio.BitWriter that writes to a io.Writer, use Flush() to write possible zero padded unaligned byte
//
// - bitio.IOReader is a io.Reader that reads bytes from a bitio.Reader, will zero pad unaligned byte at EOF
//
// - bitio.IOReadSeeker is a io.ReadSeeker that reads from a bitio.ReadSeeker, will zero pad unaligned byte at EOF
//
// - bitio.NewBitReader same as bytes.NewReader
//
// - bitio.LimitReader same as io.LimitReader
//
// - bitio.MultiReader same as io.MultiReader
//
// - bitio.SectionReader same as io.SectionReader
//
// - bitio.Copy* same as io.Copy*
//
// - bitio.ReadFull same as io.ReadFull
//
// TODO:
//
// - bitio.IOBitReader bitio.Reader that reads from a io.Reader
//
// - bitio.IOBitWriteSeeker bitio.BitWriteSeeker that writes to a io.WriteSeeker
//
// - bitio.CopyN
//
// - Speed up read by using a cache somehow ([]byte or just a uint64?)
package bitio
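Editor's note: to make the package overview above concrete, here is a small round-trip sketch (not part of the commit) using only functions whose signatures appear in this diff; it assumes the *SectionReader returned by NewBitReader satisfies bitio.Reader, as the bytes.NewReader comparison suggests.

```go
package main

import (
	"fmt"

	"github.com/wader/fq/pkg/bitio"
)

func main() {
	// 12 unaligned bits -> bit reader -> read back -> bit string again
	buf, nBits := bitio.BytesFromBitString("101011110000")
	br := bitio.NewBitReader(buf, nBits)

	p := make([]byte, bitio.BitsByteCount(nBits))
	if _, err := bitio.ReadFull(br, p, nBits); err != nil {
		panic(err)
	}
	fmt.Println(bitio.BitStringFromBytes(p, nBits)) // 101011110000
}
```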
@@ -5,13 +5,14 @@ import (
 	"io"
 )

-// IOBitReadSeeker is a bitio.BitReadAtSeeker reading from a io.ReadSeeker
+// IOBitReadSeeker is a bitio.ReadAtSeeker reading from a io.ReadSeeker.
 type IOBitReadSeeker struct {
 	bitPos int64
 	rs     io.ReadSeeker
 	buf    []byte
 }

+// NewIOBitReadSeeker returns a new bitio.IOBitReadSeeker
 func NewIOBitReadSeeker(rs io.ReadSeeker) *IOBitReadSeeker {
 	return &IOBitReadSeeker{
 		bitPos: 0,
@@ -4,14 +4,14 @@ import (
 	"io"
 )

-// IOBitWriter is a bitio.BitWriter that writes to a io.Writer
+// IOBitWriter is a bitio.Writer that writes to a io.Writer.
 // Use Flush to write possible unaligned byte zero bit padded.
 type IOBitWriter struct {
 	w io.Writer
 	b Buffer
 }

-// NewIOBitWriter returns a new bitio.IOBitWriter
+// NewIOBitWriter returns a new bitio.IOBitWriter.
 func NewIOBitWriter(w io.Writer) *IOBitWriter {
 	return &IOBitWriter{w: w}
 }
@@ -5,15 +5,15 @@ import (
 	"io"
 )

-// IOReader is a io.Reader and io.ByteReader that reads from a bitio.Reader
-// Unaligned byte at EOF will be zero bit padded
+// IOReader is a io.Reader and io.ByteReader that reads from a bitio.Reader.
+// Unaligned byte at EOF will be zero bit padded.
 type IOReader struct {
 	r    Reader
 	rErr error
 	b    Buffer
 }

-// NewIOReader returns a new bitio.IOReader
+// NewIOReader returns a new bitio.IOReader.
 func NewIOReader(r Reader) *IOReader {
 	return &IOReader{r: r}
 }
@@ -22,8 +22,6 @@ func (r *IOReader) Read(p []byte) (n int, err error) {
 	var ns int64

 	for {
-		var err error
-
 		// uses p even if returning nothing, io.Reader docs says:
 		// "it may use all of p as scratch space during the call"
 		if r.rErr == nil {
@@ -62,7 +60,7 @@ func (r *IOReader) Read(p []byte) (n int, err error) {
 		if err != nil {
 			return 0, err
 		}
-		return 1, nil
+		return 1, r.rErr
 	}
 	return 0, r.rErr
 }
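Editor's note: the IOReader comment above describes bridging a bit stream back to the byte-oriented io world, with the trailing unaligned byte zero padded. A sketch of that behavior (not from the commit, same signature assumptions as the previous sketch):

```go
package main

import (
	"fmt"
	"io"

	"github.com/wader/fq/pkg/bitio"
)

func main() {
	buf, nBits := bitio.BytesFromBitString("10101") // 5 unaligned bits
	r := bitio.NewIOReader(bitio.NewBitReader(buf, nBits))

	b, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%08b\n", b) // [10101000]: zero padded up to a whole byte
}
```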
@@ -1,14 +1,14 @@
 package bitio

-// IOReadSeeker is a io.ReadSeeker that reads and seeks from a bitio.BitReadSeeker
-// Unaligned byte at EOF will be zero bit padded
+// IOReadSeeker is a io.ReadSeeker that reads from a bitio.ReadSeeker.
+// Unaligned byte at EOF will be zero bit padded.
 type IOReadSeeker struct {
 	IOReader
 	s    Seeker
 	sPos int64
 }

-// NewIOReadSeeker return a new bitio.IOReadSeeker
+// NewIOReadSeeker return a new bitio.IOReadSeeker.
 func NewIOReadSeeker(rs ReadSeeker) *IOReadSeeker {
 	return &IOReadSeeker{
 		IOReader: IOReader{r: rs},
@@ -108,7 +108,7 @@ func Test(t *testing.T) {
 	for _, p := range bsParts {
 		bsBRs = append(bsBRs, sb(p))
 	}
-	bsBR, err := bitio.NewMultiBitReader(bsBRs...)
+	bsBR, err := bitio.NewMultiReader(bsBRs...)
 	if err != nil {
 		panic(err)
 	}
@@ -4,14 +4,14 @@ import (
 	"io"
 )

-// LimitReader is a bitio.Reader that reads a limited amount of bits from a bitio.Reader
-// Similar to bytes.LimitedReader
+// LimitReader is a bitio.Reader that reads a limited amount of bits from a bitio.Reader.
+// Similar to bytes.LimitedReader.
 type LimitReader struct {
 	r Reader
 	n int64
 }

-// NewLimitReader returns a new bitio.LimitReader
+// NewLimitReader returns a new bitio.LimitReader.
 func NewLimitReader(r Reader, n int64) *LimitReader { return &LimitReader{r, n} }

 func (l *LimitReader) ReadBits(p []byte, nBits int64) (n int64, err error) {
@@ -23,15 +23,16 @@ func endPos(rs Seeker) (int64, error) {
 	return e, nil
 }

-// MultiReader is a bitio.ReaderAtSeeker concatinating multiple bitio.ReadAtSeeker:s
-// Similar to io.MultiReader
+// MultiReader is a bitio.ReaderAtSeeker concatinating multiple bitio.ReadAtSeeker:s.
+// Similar to io.MultiReader.
 type MultiReader struct {
 	pos        int64
 	readers    []ReadAtSeeker
 	readerEnds []int64
 }

-func NewMultiBitReader(rs ...ReadAtSeeker) (*MultiReader, error) {
+// NewMultiReader returns a new bitio.MultiReader.
+func NewMultiReader(rs ...ReadAtSeeker) (*MultiReader, error) {
 	readerEnds := make([]int64, len(rs))
 	var esSum int64
 	for i, r := range rs {
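Editor's note: with the constructor renamed to NewMultiReader, concatenating unaligned readers looks roughly like this (a sketch, not from the commit; it mirrors the `[0xab] | tobits[:4]` example in the doc/usage.md hunk above and assumes MultiReader satisfies bitio.Reader, as its ReaderAtSeeker doc comment implies):

```go
package main

import (
	"fmt"

	"github.com/wader/fq/pkg/bitio"
)

func main() {
	hi := bitio.NewBitReader([]byte{0xab}, 4) // high nibble of 0xab
	lo := bitio.NewBitReader([]byte{0xcd}, 8)

	mr, err := bitio.NewMultiReader(hi, lo)
	if err != nil {
		panic(err)
	}

	p := make([]byte, bitio.BitsByteCount(12))
	if _, err := bitio.ReadFull(mr, p, 12); err != nil {
		panic(err)
	}
	fmt.Println(bitio.BitStringFromBytes(p, 12)) // 101011001101
}
```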
@@ -90,6 +90,8 @@ func Read64(buf []byte, firstBit int64, nBits int64) uint64 {
 	return n
 }

+// Write64 writes nBits bits large unsigned integer to buf starting from firstBit.
+// Integer is written most significant bit first.
 func Write64(v uint64, nBits int64, buf []byte, firstBit int64) {
 	if nBits < 0 || nBits > 64 {
 		panic(fmt.Sprintf("nBits must be 0-64 (%d)", nBits))
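Editor's note: the new doc comment spells out the Write64 contract. A tiny sketch of the pair in action (not from the commit), assuming the MSB-first bit addressing the comment describes:

```go
package main

import (
	"fmt"

	"github.com/wader/fq/pkg/bitio"
)

func main() {
	buf := make([]byte, 2)
	bitio.Write64(0b101, 3, buf, 4)      // write the 3-bit value 5 starting at bit 4
	fmt.Printf("%08b\n", buf)            // [00001010 00000000]
	fmt.Println(bitio.Read64(buf, 4, 3)) // 5
}
```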
@@ -2,8 +2,8 @@ package bitio

 import "fmt"

-// ReverseBytes64 reverses the bytes part of the lowest nBits
-// Similar to bits.ReverseBytes64 but only rotates the lowest bytes and rest of bytes will be zero
+// ReverseBytes64 reverses the bytes part of the lowest nBits.
+// Similar to bits.ReverseBytes64 but only rotates the lowest bytes and rest of bytes will be zero.
 func ReverseBytes64(nBits int, n uint64) uint64 {
 	switch {
 	case nBits <= 8:
@@ -4,8 +4,8 @@ import (
 	"io"
 )

-// SectionReader is a bitio.BitReaderAtSeeker reading a section of a bitio.ReaderAt
-// Similar to io.SectionReader
+// SectionReader is a bitio.BitReaderAtSeeker reading a section of a bitio.ReaderAt.
+// Similar to io.SectionReader.
 type SectionReader struct {
 	r       ReaderAt
 	bitBase int64
@@ -13,7 +13,7 @@ type SectionReader struct {
 	bitLimit int64
 }

-// NewSectionReader returns a new bitio.SectionReader
+// NewSectionReader returns a new bitio.SectionReader.
 func NewSectionReader(r ReaderAt, bitOff int64, nBits int64) *SectionReader {
 	return &SectionReader{
 		r: r,
@@ -158,7 +158,7 @@ func (stdOSFS) Open(name string) (fs.File, error) { return os.Open(name) }

 func (*stdOS) FS() fs.FS { return stdOSFS{} }

-func (o *stdOS) Readline(prompt string, complete func(line string, pos int) (newLine []string, shared int)) (string, error) {
+func (o *stdOS) Readline(opts interp.ReadlineOpts) (string, error) {
 	if o.rl == nil {
 		var err error

@@ -179,9 +179,9 @@ func (o *stdOS) Readline(prompt string, complete func(line string, pos int) (new
 		}
 	}

-	if complete != nil {
+	if opts.CompleteFn != nil {
 		o.rl.Config.AutoComplete = autoCompleterFn(func(line []rune, pos int) (newLine [][]rune, length int) {
-			names, shared := complete(string(line), pos)
+			names, shared := opts.CompleteFn(string(line), pos)
 			var runeNames [][]rune
 			for _, name := range names {
 				runeNames = append(runeNames, []rune(name[shared:]))
@@ -191,7 +191,7 @@ func (o *stdOS) Readline(prompt string, complete func(line string, pos int) (new
 		})
 	}

-	o.rl.SetPrompt(prompt)
+	o.rl.SetPrompt(opts.Prompt)
 	line, err := o.rl.Readline()
 	if errors.Is(err, readline.ErrInterrupt) {
 		return "", interp.ErrInterrupt
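Editor's note: the Readline hook now takes one options struct instead of positional prompt/complete arguments. A minimal sketch (not from the commit) of another host implementation under the new API; only the opts.Prompt and opts.CompleteFn fields used above are assumed, and scriptedOS is a hypothetical type:

```go
package myhost

import (
	"fmt"
	"io"

	"github.com/wader/fq/pkg/interp"
)

// scriptedOS replays canned input lines, e.g. for tests.
type scriptedOS struct {
	out   io.Writer
	lines []string
}

func (s *scriptedOS) Readline(opts interp.ReadlineOpts) (string, error) {
	fmt.Fprint(s.out, opts.Prompt) // echo the prompt like the real host does
	if len(s.lines) == 0 {
		return "", io.EOF
	}
	line := s.lines[0]
	s.lines = s.lines[1:]
	return line, nil
}
```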
@@ -112,7 +112,7 @@ func decode(ctx context.Context, br bitio.ReaderAtSeeker, group Group, opts Opti
 	if err := d.Value.WalkRootPreOrder(func(v *Value, rootV *Value, depth int, rootDepth int) error {
 		minMaxRange = ranges.MinMax(minMaxRange, v.Range)
 		v.Range.Start += decodeRange.Start
-		v.RootBitBuf = br
+		v.RootReader = br
 		return nil
 	}); err != nil {
 		return nil, nil, err
@@ -165,7 +165,7 @@ func newDecoder(ctx context.Context, format Format, br bitio.ReaderAtSeeker, opt
 		Value: &Value{
 			Name:       name,
 			V:          rootV,
-			RootBitBuf: br,
+			RootReader: br,
 			Range:      ranges.Range{Start: 0, Len: 0},
 			IsRoot:     opts.IsRoot,
 		},
@@ -184,7 +184,7 @@ func (d *D) FieldDecoder(name string, bitBuf bitio.ReaderAtSeeker, v interface{}
 			Name:       name,
 			V:          v,
 			Range:      ranges.Range{Start: d.Pos(), Len: 0},
-			RootBitBuf: bitBuf,
+			RootReader: bitBuf,
 		},
 		Options: d.Options,

@@ -289,7 +289,7 @@ func (d *D) FillGaps(r ranges.Range, namePrefix string) {
 	for i, gap := range gaps {
 		br, err := bitioextra.Range(d.bitBuf, gap.Start, gap.Len)
 		if err != nil {
-			d.IOPanic(err, "FillGaps: BitBufRange")
+			d.IOPanic(err, "FillGaps: Range")
 		}

 		v := &Value{
@@ -298,7 +298,7 @@ func (d *D) FillGaps(r ranges.Range, namePrefix string) {
 				Actual:  br,
 				Unknown: true,
 			},
-			RootBitBuf: d.bitBuf,
+			RootReader: d.bitBuf,
 			Range:      gap,
 		}

@@ -779,7 +779,7 @@ func (d *D) FieldArrayLoop(name string, condFn func() bool, fn func(d *D)) *D {
 func (d *D) FieldRangeFn(name string, firstBit int64, nBits int64, fn func() *Value) *Value {
 	v := fn()
 	v.Name = name
-	v.RootBitBuf = d.bitBuf
+	v.RootReader = d.bitBuf
 	v.Range = ranges.Range{Start: firstBit, Len: nBits}
 	d.AddChild(v)

@@ -892,7 +892,7 @@ func (d *D) RangeFn(firstBit int64, nBits int64, fn func(d *D)) int64 {
 	// TODO: refactor, similar to decode()
 	if err := sd.Value.WalkRootPreOrder(func(v *Value, rootV *Value, depth int, rootDepth int) error {
 		//v.Range.Start += firstBit
-		v.RootBitBuf = d.Value.RootBitBuf
+		v.RootReader = d.Value.RootReader
 		endPos = mathextra.MaxInt64(endPos, v.Range.Stop())

 		return nil
@@ -1070,7 +1070,7 @@ func (d *D) FieldRootBitBuf(name string, br bitio.ReaderAtSeeker, sms ...scalar.
 	v := &Value{}
 	v.V = &scalar.S{Actual: br}
 	v.Name = name
-	v.RootBitBuf = br
+	v.RootReader = br
 	v.IsRoot = true
 	v.Range = ranges.Range{Start: d.Pos(), Len: brLen}

@@ -1164,7 +1164,7 @@ func (d *D) TryFieldValue(name string, fn func() (*Value, error)) (*Value, error
 	v, err := fn()
 	stop := d.Pos()
 	v.Name = name
-	v.RootBitBuf = d.bitBuf
+	v.RootReader = d.bitBuf
 	v.Range = ranges.Range{Start: start, Len: stop - start}
 	if err != nil {
 		return nil, err
@@ -27,7 +27,7 @@ type Value struct {
 	V          interface{} // scalar.S or Compound (array/struct)
 	Index      int         // index in parent array/struct
 	Range      ranges.Range
-	RootBitBuf bitio.ReaderAtSeeker
+	RootReader bitio.ReaderAtSeeker
 	IsRoot     bool // TODO: rework?
 }
@@ -16,29 +16,43 @@ import (
	"github.com/wader/fq/internal/progressreadseeker"
	"github.com/wader/fq/pkg/bitio"
	"github.com/wader/fq/pkg/ranges"
	"github.com/wader/gojq"
)

func init() {
	functionRegisterFns = append(functionRegisterFns, func(i *Interp) []Function {
		return []Function{
			{"_tobitsrange", 0, 2, i._toBitsRange, nil},
			{"open", 0, 0, i._open, nil},
			{"_tobits", 3, 3, i._toBits, nil},
			{"open", 0, 0, nil, i._open},
		}
	})
}

type ToBuffer interface {
	ToBuffer() (Buffer, error)
type ToBinary interface {
	ToBinary() (Binary, error)
}

func toBitBuf(v interface{}) (bitio.ReaderAtSeeker, error) {
	return toBitBufEx(v, false)
}

func toBitBufEx(v interface{}, inArray bool) (bitio.ReaderAtSeeker, error) {
func toBinary(v interface{}) (Binary, error) {
	switch vv := v.(type) {
	case ToBuffer:
		bv, err := vv.ToBuffer()
	case ToBinary:
		return vv.ToBinary()
	default:
		br, err := toBitReader(v)
		if err != nil {
			return Binary{}, err
		}
		return newBinaryFromBitReader(br, 8, 0)
	}
}

func toBitReader(v interface{}) (bitio.ReaderAtSeeker, error) {
	return toBitReaderEx(v, false)
}

func toBitReaderEx(v interface{}, inArray bool) (bitio.ReaderAtSeeker, error) {
	switch vv := v.(type) {
	case ToBinary:
		bv, err := vv.ToBinary()
		if err != nil {
			return nil, err
		}
@@ -53,7 +67,7 @@ func toBitBufEx(v interface{}, inArray bool) (bitio.ReaderAtSeeker, error) {

	if inArray {
		if bi.Cmp(big.NewInt(255)) > 0 || bi.Cmp(big.NewInt(0)) < 0 {
			return nil, fmt.Errorf("buffer byte list must be bytes (0-255) got %v", bi)
			return nil, fmt.Errorf("byte in binary list must be bytes (0-255) got %v", bi)
		}
		n := bi.Uint64()
		b := [1]byte{byte(n)}
@@ -78,106 +92,95 @@ func toBitBufEx(v interface{}, inArray bool) (bitio.ReaderAtSeeker, error) {
	rr := make([]bitio.ReadAtSeeker, 0, len(vv))
	// TODO: optimize byte array case, flatten into one slice
	for _, e := range vv {
		eBR, eErr := toBitBufEx(e, true)
		eBR, eErr := toBitReaderEx(e, true)
		if eErr != nil {
			return nil, eErr
		}
		rr = append(rr, eBR)
	}

	mb, err := bitio.NewMultiBitReader(rr...)
	mb, err := bitio.NewMultiReader(rr...)
	if err != nil {
		return nil, err
	}

	return mb, nil
default:
	return nil, fmt.Errorf("value can't be a buffer")
	return nil, fmt.Errorf("value can't be a binary")
	}
}

func toBuffer(v interface{}) (Buffer, error) {
	switch vv := v.(type) {
	case ToBuffer:
		return vv.ToBuffer()
	default:
		br, err := toBitBuf(v)
		if err != nil {
			return Buffer{}, err
		}
		return newBufferFromBuffer(br, 8)
// note is used to implement tobytes* also
func (i *Interp) _toBits(c interface{}, a []interface{}) interface{} {
	unit, ok := gojqextra.ToInt(a[0])
	if !ok {
		return gojqextra.FuncTypeError{Name: "_tobits", V: a[0]}
	}
}

// note is used to implement tobytes*/0 also
func (i *Interp) _toBitsRange(c interface{}, a []interface{}) interface{} {
	var unit int
	var r bool
	var ok bool

	if len(a) >= 1 {
		unit, ok = gojqextra.ToInt(a[0])
		if !ok {
			return gojqextra.FuncTypeError{Name: "_tobitsrange", V: a[0]}
		}
	} else {
		unit = 1
	keepRange, ok := gojqextra.ToBoolean(a[1])
	if !ok {
		return gojqextra.FuncTypeError{Name: "_tobits", V: a[1]}
	}

	if len(a) >= 2 {
		r, ok = gojqextra.ToBoolean(a[1])
		if !ok {
			return gojqextra.FuncTypeError{Name: "_tobitsrange", V: a[1]}
		}
	} else {
		r = true
	padToUnits, ok := gojqextra.ToInt(a[2])
	if !ok {
		return gojqextra.FuncTypeError{Name: "_tobits", V: a[2]}
	}

	// TODO: unit > 8?

	bv, err := toBuffer(c)
	bv, err := toBinary(c)
	if err != nil {
		return err
	}
	bv.unit = unit

	if !r {
		br, err := bv.toBuffer()
		if err != nil {
			return err
		}
		bb, err := newBufferFromBuffer(br, unit)
		if err != nil {
			return err
		}
		return bb
	pad := int64(unit * padToUnits)
	if pad == 0 {
		pad = int64(unit)
	}

	return bv
	bv.unit = unit
	bv.pad = (pad - bv.r.Len%pad) % pad

	if keepRange {
		return bv
	}

	br, err := bv.toReader()
	if err != nil {
		return err
	}
	bb, err := newBinaryFromBitReader(br, bv.unit, bv.pad)
	if err != nil {
		return err
	}
	return bb
}

type openFile struct {
	Buffer
	Binary
	filename   string
	progressFn progressreadseeker.ProgressFn
}

var _ Value = (*openFile)(nil)
var _ ToBuffer = (*openFile)(nil)
var _ ToBinary = (*openFile)(nil)

func (of *openFile) Display(w io.Writer, opts Options) error {
	_, err := fmt.Fprintf(w, "<openfile %q>\n", of.filename)
	return err
}

func (of *openFile) ToBuffer() (Buffer, error) {
	return newBufferFromBuffer(of.br, 8)
func (of *openFile) ToBinary() (Binary, error) {
	return newBinaryFromBitReader(of.br, 8, 0)
}

// def open: #:: string| => buffer
// def open: #:: string| => binary
// opens a file for reading from filesystem
// TODO: when to close? when br loses all refs? need to use finalizer somehow?
func (i *Interp) _open(c interface{}, a []interface{}) interface{} {
func (i *Interp) _open(c interface{}, a []interface{}) gojq.Iter {
	if i.evalContext.isCompleting {
		return gojq.NewIter()
	}

	var err error
	var f fs.File
	var path string
@@ -189,11 +192,11 @@ func (i *Interp) _open(c interface{}, a []interface{}) interface{} {
	default:
		path, err = toString(c)
		if err != nil {
			return fmt.Errorf("%s: %w", path, err)
			return gojq.NewIter(fmt.Errorf("%s: %w", path, err))
		}
		f, err = i.os.FS().Open(path)
		if err != nil {
			return err
			return gojq.NewIter(err)
		}
	}

@@ -203,7 +206,7 @@ func (i *Interp) _open(c interface{}, a []interface{}) interface{} {
	fFI, err := f.Stat()
	if err != nil {
		f.Close()
		return err
		return gojq.NewIter(err)
	}

	// ctxreadseeker is used to make sure any io calls can be canceled
@@ -221,7 +224,7 @@ func (i *Interp) _open(c interface{}, a []interface{}) interface{} {
	buf, err := ioutil.ReadAll(ctxreadseeker.New(i.evalContext.ctx, &ioextra.ReadErrSeeker{Reader: f}))
	if err != nil {
		f.Close()
		return err
		return gojq.NewIter(err)
	}
	fRS = bytes.NewReader(buf)
	bEnd = int64(len(buf))
@@ -248,35 +251,37 @@ func (i *Interp) _open(c interface{}, a []interface{}) interface{} {

	bbf.br = bitio.NewIOBitReadSeeker(aheadRs)
	if err != nil {
		return err
		return gojq.NewIter(err)
	}

	return bbf
	return gojq.NewIter(bbf)
}

var _ Value = Buffer{}
var _ ToBuffer = Buffer{}
var _ Value = Binary{}
var _ ToBinary = Binary{}

type Buffer struct {
type Binary struct {
	br   bitio.ReaderAtSeeker
	r    ranges.Range
	unit int
	pad  int64
}

func newBufferFromBuffer(br bitio.ReaderAtSeeker, unit int) (Buffer, error) {
func newBinaryFromBitReader(br bitio.ReaderAtSeeker, unit int, pad int64) (Binary, error) {
	l, err := bitioextra.Len(br)
	if err != nil {
		return Buffer{}, err
		return Binary{}, err
	}

	return Buffer{
	return Binary{
		br:   br,
		r:    ranges.Range{Start: 0, Len: l},
		unit: unit,
		pad:  pad,
	}, nil
}

func (b Buffer) toBytesBuffer(r ranges.Range) (*bytes.Buffer, error) {
func (b Binary) toBytesBuffer(r ranges.Range) (*bytes.Buffer, error) {
	br, err := bitioextra.Range(b.br, r.Start, r.Len)
	if err != nil {
		return nil, err
@@ -289,9 +294,9 @@ func (b Buffer) toBytesBuffer(r ranges.Range) (*bytes.Buffer, error) {
	return buf, nil
}

func (Buffer) ExtType() string { return "buffer" }
func (Binary) ExtType() string { return "binary" }

func (Buffer) ExtKeys() []string {
func (Binary) ExtKeys() []string {
	return []string{
		"size",
		"start",
@@ -301,18 +306,18 @@ func (Buffer) ExtKeys() []string {
	}
}

func (b Buffer) ToBuffer() (Buffer, error) {
func (b Binary) ToBinary() (Binary, error) {
	return b, nil
}

func (b Buffer) JQValueLength() interface{} {
func (b Binary) JQValueLength() interface{} {
	return int(b.r.Len / int64(b.unit))
}
func (b Buffer) JQValueSliceLen() interface{} {
func (b Binary) JQValueSliceLen() interface{} {
	return b.JQValueLength()
}

func (b Buffer) JQValueIndex(index int) interface{} {
func (b Binary) JQValueIndex(index int) interface{} {
	if index < 0 {
		return nil
	}
@@ -326,17 +331,17 @@ func (b Buffer) JQValueIndex(index int) interface{} {

	return new(big.Int).Rsh(new(big.Int).SetBytes(buf.Bytes()), extraBits)
}
func (b Buffer) JQValueSlice(start int, end int) interface{} {
func (b Binary) JQValueSlice(start int, end int) interface{} {
	rStart := int64(start * b.unit)
	rLen := int64((end - start) * b.unit)

	return Buffer{
	return Binary{
		br:   b.br,
		r:    ranges.Range{Start: b.r.Start + rStart, Len: rLen},
		unit: b.unit,
	}
}
func (b Buffer) JQValueKey(name string) interface{} {
func (b Binary) JQValueKey(name string) interface{} {
	switch name {
	case "size":
		return new(big.Int).SetInt64(b.r.Len / int64(b.unit))
@@ -353,28 +358,28 @@ func (b Buffer) JQValueKey(name string) interface{} {
		if b.unit == 1 {
			return b
		}
		return Buffer{br: b.br, r: b.r, unit: 1}
		return Binary{br: b.br, r: b.r, unit: 1}
	case "bytes":
		if b.unit == 8 {
			return b
		}
		return Buffer{br: b.br, r: b.r, unit: 8}
		return Binary{br: b.br, r: b.r, unit: 8}
	}
	return nil
}
func (b Buffer) JQValueEach() interface{} {
func (b Binary) JQValueEach() interface{} {
	return nil
}
func (b Buffer) JQValueType() string {
	return "buffer"
func (b Binary) JQValueType() string {
	return "binary"
}
func (b Buffer) JQValueKeys() interface{} {
	return gojqextra.FuncTypeNameError{Name: "keys", Typ: "buffer"}
func (b Binary) JQValueKeys() interface{} {
	return gojqextra.FuncTypeNameError{Name: "keys", Typ: "binary"}
}
func (b Buffer) JQValueHas(key interface{}) interface{} {
	return gojqextra.HasKeyTypeError{L: "buffer", R: fmt.Sprintf("%v", key)}
func (b Binary) JQValueHas(key interface{}) interface{} {
|
||||
return gojqextra.HasKeyTypeError{L: "binary", R: fmt.Sprintf("%v", key)}
|
||||
}
|
||||
func (b Buffer) JQValueToNumber() interface{} {
|
||||
func (b Binary) JQValueToNumber() interface{} {
|
||||
buf, err := b.toBytesBuffer(b.r)
|
||||
if err != nil {
|
||||
return err
|
||||
@ -382,23 +387,23 @@ func (b Buffer) JQValueToNumber() interface{} {
|
||||
extraBits := uint((8 - b.r.Len%8) % 8)
|
||||
return new(big.Int).Rsh(new(big.Int).SetBytes(buf.Bytes()), extraBits)
|
||||
}
|
||||
func (b Buffer) JQValueToString() interface{} {
|
||||
func (b Binary) JQValueToString() interface{} {
|
||||
return b.JQValueToGoJQ()
|
||||
}
|
||||
func (b Buffer) JQValueToGoJQ() interface{} {
|
||||
func (b Binary) JQValueToGoJQ() interface{} {
|
||||
buf, err := b.toBytesBuffer(b.r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return buf.String()
|
||||
}
|
||||
func (b Buffer) JQValueUpdate(key interface{}, u interface{}, delpath bool) interface{} {
|
||||
return gojqextra.NonUpdatableTypeError{Key: fmt.Sprintf("%v", key), Typ: "buffer"}
|
||||
func (b Binary) JQValueUpdate(key interface{}, u interface{}, delpath bool) interface{} {
|
||||
return gojqextra.NonUpdatableTypeError{Key: fmt.Sprintf("%v", key), Typ: "binary"}
|
||||
}
|
||||
|
||||
func (b Buffer) Display(w io.Writer, opts Options) error {
|
||||
func (b Binary) Display(w io.Writer, opts Options) error {
|
||||
if opts.RawOutput {
|
||||
br, err := b.toBuffer()
|
||||
br, err := b.toReader()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -413,6 +418,13 @@ func (b Buffer) Display(w io.Writer, opts Options) error {
|
||||
return hexdump(w, b, opts)
|
||||
}
|
||||
|
||||
func (b Buffer) toBuffer() (bitio.ReaderAtSeeker, error) {
|
||||
return bitioextra.Range(b.br, b.r.Start, b.r.Len)
|
||||
func (b Binary) toReader() (bitio.ReaderAtSeeker, error) {
|
||||
br, err := bitioextra.Range(b.br, b.r.Start, b.r.Len)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if b.pad == 0 {
|
||||
return br, nil
|
||||
}
|
||||
return bitio.NewMultiReader(bitioextra.NewZeroAtSeeker(b.pad), br)
|
||||
}
|
8
pkg/interp/binary.jq
Normal file
@ -0,0 +1,8 @@
|
||||
def tobits: _tobits(1; false; 0);
|
||||
def tobytes: _tobits(8; false; 0);
|
||||
def tobitsrange: _tobits(1; true; 0);
|
||||
def tobytesrange: _tobits(8; true; 0);
|
||||
def tobits($pad): _tobits(1; false; $pad);
|
||||
def tobytes($pad): _tobits(8; false; $pad);
|
||||
def tobitsrange($pad): _tobits(1; true; $pad);
|
||||
def tobytesrange($pad): _tobits(8; true; $pad);
|
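As a usage sketch of the new pad argument (repl-style; the expected values match the buffer.fqtest expectations later in this diff): the result is left-padded with zero bits up to a whole multiple of the given number of units, bits for tobits and bytes for tobytes:
mp3> 1 | tobits(8) | hex
"01"
mp3> 1 | tobytes(4) | hex
"00000001"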
@ -1,4 +0,0 @@
|
||||
def tobitsrange: _tobitsrange;
|
||||
def tobytesrange: _tobitsrange(8);
|
||||
def tobits: _tobitsrange(1; false);
|
||||
def tobytes: _tobitsrange(8; false);
|
@ -44,7 +44,7 @@ func (err expectedExtkeyError) Error() string {
|
||||
// used by _isDecodeValue
|
||||
type DecodeValue interface {
|
||||
Value
|
||||
ToBuffer
|
||||
ToBinary
|
||||
|
||||
DecodeValue() *decode.Value
|
||||
}
|
||||
@ -162,7 +162,7 @@ func (i *Interp) _decode(c interface{}, a []interface{}) interface{} {
|
||||
c,
|
||||
opts.Progress,
|
||||
nil,
|
||||
ioextra.DiscardCtxWriter{Ctx: i.evalContext.ctx},
|
||||
EvalOpts{output: ioextra.DiscardCtxWriter{Ctx: i.evalContext.ctx}},
|
||||
)
|
||||
}
|
||||
lastProgress := time.Now()
|
||||
@ -188,7 +188,7 @@ func (i *Interp) _decode(c interface{}, a []interface{}) interface{} {
|
||||
}
|
||||
}
|
||||
|
||||
bv, err := toBuffer(c)
|
||||
bv, err := toBinary(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -276,7 +276,7 @@ func makeDecodeValue(dv *decode.Value) interface{} {
|
||||
switch vv := vv.Value().(type) {
|
||||
case bitio.ReaderAtSeeker:
|
||||
// is lazy so that in situations where the decode value is only used to
|
||||
// create another buffer we don't have to read and create a string, ex:
|
||||
// create another binary we don't have to read and create a string, ex:
|
||||
// .unknown0 | tobytes[1:] | ...
|
||||
return decodeValue{
|
||||
JQValue: &gojqextra.Lazy{
|
||||
@ -364,8 +364,8 @@ func (dvb decodeValueBase) DecodeValue() *decode.Value {
|
||||
}
|
||||
|
||||
func (dvb decodeValueBase) Display(w io.Writer, opts Options) error { return dump(dvb.dv, w, opts) }
|
||||
func (dvb decodeValueBase) ToBuffer() (Buffer, error) {
|
||||
return Buffer{br: dvb.dv.RootBitBuf, r: dvb.dv.InnerRange(), unit: 8}, nil
|
||||
func (dvb decodeValueBase) ToBinary() (Binary, error) {
|
||||
return Binary{br: dvb.dv.RootReader, r: dvb.dv.InnerRange(), unit: 8}, nil
|
||||
}
|
||||
func (decodeValueBase) ExtType() string { return "decode_value" }
|
||||
func (dvb decodeValueBase) ExtKeys() []string {
|
||||
@ -479,14 +479,14 @@ func (dvb decodeValueBase) JQValueKey(name string) interface{} {
|
||||
return nil
|
||||
}
|
||||
case "_bits":
|
||||
return Buffer{
|
||||
br: dv.RootBitBuf,
|
||||
return Binary{
|
||||
br: dv.RootReader,
|
||||
r: dv.Range,
|
||||
unit: 1,
|
||||
}
|
||||
case "_bytes":
|
||||
return Buffer{
|
||||
br: dv.RootBitBuf,
|
||||
return Binary{
|
||||
br: dv.RootReader,
|
||||
r: dv.Range,
|
||||
unit: 8,
|
||||
}
|
||||
@ -543,11 +543,11 @@ func (v decodeValue) JQValueToGoJQEx(optsFn func() Options) interface{} {
|
||||
return v.JQValueToGoJQ()
|
||||
}
|
||||
|
||||
bv, err := v.decodeValueBase.ToBuffer()
|
||||
bv, err := v.decodeValueBase.ToBinary()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
br, err := bv.toBuffer()
|
||||
br, err := bv.toReader()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -219,7 +219,7 @@ func dumpEx(v *decode.Value, buf []byte, cw *columnwriter.Writer, depth int, roo
|
||||
printErrs(depth, valueErr)
|
||||
}
|
||||
|
||||
rootBitLen, err := bitioextra.Len(rootV.RootBitBuf)
|
||||
rootBitLen, err := bitioextra.Len(rootV.RootReader)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -267,7 +267,7 @@ func dumpEx(v *decode.Value, buf []byte, cw *columnwriter.Writer, depth int, roo
|
||||
cfmt(colAddr, "%s%s\n",
|
||||
rootIndent, deco.DumpAddr.F(mathextra.PadFormatInt(startLineByte, opts.AddrBase, true, addrWidth)))
|
||||
|
||||
vBR, err := bitioextra.Range(rootV.RootBitBuf, startByte*8, displaySizeBits)
|
||||
vBR, err := bitioextra.Range(rootV.RootReader, startByte*8, displaySizeBits)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -364,8 +364,8 @@ func dump(v *decode.Value, w io.Writer, opts Options) error {
|
||||
}))
|
||||
}
|
||||
|
||||
func hexdump(w io.Writer, bv Buffer, opts Options) error {
|
||||
br, err := bv.toBuffer()
|
||||
func hexdump(w io.Writer, bv Binary, opts Options) error {
|
||||
br, err := bv.toReader()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -389,7 +389,7 @@ func hexdump(w io.Writer, bv Buffer, opts Options) error {
|
||||
// TODO: hack
|
||||
V: &scalar.S{Actual: br},
|
||||
Range: bv.r,
|
||||
RootBitBuf: biib,
|
||||
RootReader: biib,
|
||||
},
|
||||
w,
|
||||
opts,
|
||||
|
@ -23,26 +23,26 @@ func init() {
|
||||
return []Function{
|
||||
{"_hexdump", 1, 1, nil, i._hexdump},
|
||||
|
||||
{"hex", 0, 0, makeStringBitBufTransformFn(
|
||||
{"hex", 0, 0, makeStringBinaryTransformFn(
|
||||
func(r io.Reader) (io.Reader, error) { return hex.NewDecoder(r), nil },
|
||||
func(r io.Writer) (io.Writer, error) { return hex.NewEncoder(r), nil },
|
||||
), nil},
|
||||
|
||||
{"base64", 0, 0, makeStringBitBufTransformFn(
|
||||
{"base64", 0, 0, makeStringBinaryTransformFn(
|
||||
func(r io.Reader) (io.Reader, error) { return base64.NewDecoder(base64.StdEncoding, r), nil },
|
||||
func(r io.Writer) (io.Writer, error) { return base64.NewEncoder(base64.StdEncoding, r), nil },
|
||||
), nil},
|
||||
{"rawbase64", 0, 0, makeStringBitBufTransformFn(
|
||||
{"rawbase64", 0, 0, makeStringBinaryTransformFn(
|
||||
func(r io.Reader) (io.Reader, error) { return base64.NewDecoder(base64.RawURLEncoding, r), nil },
|
||||
func(r io.Writer) (io.Writer, error) { return base64.NewEncoder(base64.RawURLEncoding, r), nil },
|
||||
), nil},
|
||||
|
||||
{"urlbase64", 0, 0, makeStringBitBufTransformFn(
|
||||
{"urlbase64", 0, 0, makeStringBinaryTransformFn(
|
||||
func(r io.Reader) (io.Reader, error) { return base64.NewDecoder(base64.URLEncoding, r), nil },
|
||||
func(r io.Writer) (io.Writer, error) { return base64.NewEncoder(base64.URLEncoding, r), nil },
|
||||
), nil},
|
||||
|
||||
{"nal_unescape", 0, 0, makeBitBufTransformFn(func(r io.Reader) (io.Reader, error) {
|
||||
{"nal_unescape", 0, 0, makeBinaryTransformFn(func(r io.Reader) (io.Reader, error) {
|
||||
return &decode.NALUnescapeReader{Reader: r}, nil
|
||||
}), nil},
|
||||
|
||||
@ -57,15 +57,15 @@ func init() {
|
||||
})
|
||||
}
|
||||
|
||||
// transform byte string <-> buffer using fn:s
|
||||
func makeStringBitBufTransformFn(
|
||||
// transform byte string <-> binary using fn:s
|
||||
func makeStringBinaryTransformFn(
|
||||
decodeFn func(r io.Reader) (io.Reader, error),
|
||||
encodeFn func(w io.Writer) (io.Writer, error),
|
||||
) func(c interface{}, a []interface{}) interface{} {
|
||||
return func(c interface{}, a []interface{}) interface{} {
|
||||
switch c := c.(type) {
|
||||
case string:
|
||||
br, err := toBitBuf(c)
|
||||
br, err := toBitReader(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -80,13 +80,13 @@ func makeStringBitBufTransformFn(
|
||||
return err
|
||||
}
|
||||
|
||||
bb, err := newBufferFromBuffer(bitio.NewBitReader(buf.Bytes(), -1), 8)
|
||||
bb, err := newBinaryFromBitReader(bitio.NewBitReader(buf.Bytes(), -1), 8, 0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return bb
|
||||
default:
|
||||
br, err := toBitBuf(c)
|
||||
br, err := toBitReader(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -110,10 +110,10 @@ func makeStringBitBufTransformFn(
|
||||
}
|
||||
}
|
||||
|
||||
// transform to buffer using fn
|
||||
func makeBitBufTransformFn(fn func(r io.Reader) (io.Reader, error)) func(c interface{}, a []interface{}) interface{} {
|
||||
// transform to binary using fn
|
||||
func makeBinaryTransformFn(fn func(r io.Reader) (io.Reader, error)) func(c interface{}, a []interface{}) interface{} {
|
||||
return func(c interface{}, a []interface{}) interface{} {
|
||||
inBR, err := toBitBuf(c)
|
||||
inBR, err := toBitReader(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -130,7 +130,7 @@ func makeBitBufTransformFn(fn func(r io.Reader) (io.Reader, error)) func(c inter
|
||||
|
||||
outBR := bitio.NewBitReader(outBuf.Bytes(), -1)
|
||||
|
||||
bb, err := newBufferFromBuffer(outBR, 8)
|
||||
bb, err := newBinaryFromBitReader(outBR, 8, 0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -138,10 +138,10 @@ func makeBitBufTransformFn(fn func(r io.Reader) (io.Reader, error)) func(c inter
|
||||
}
|
||||
}
|
||||
|
||||
// transform to buffer using fn
|
||||
// transform to binary using fn
|
||||
func makeHashFn(fn func() (hash.Hash, error)) func(c interface{}, a []interface{}) interface{} {
|
||||
return func(c interface{}, a []interface{}) interface{} {
|
||||
inBR, err := toBitBuf(c)
|
||||
inBR, err := toBitReader(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -156,7 +156,7 @@ func makeHashFn(fn func() (hash.Hash, error)) func(c interface{}, a []interface{
|
||||
|
||||
outBR := bitio.NewBitReader(h.Sum(nil), -1)
|
||||
|
||||
bb, err := newBufferFromBuffer(outBR, 8)
|
||||
bb, err := newBinaryFromBitReader(outBR, 8, 0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -234,7 +234,7 @@ func (i *Interp) aesCtr(c interface{}, a []interface{}) interface{} {
|
||||
ivBytes = make([]byte, block.BlockSize())
|
||||
}
|
||||
|
||||
br, err := toBitBuf(c)
|
||||
br, err := toBitReader(c)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -245,7 +245,7 @@ func (i *Interp) aesCtr(c interface{}, a []interface{}) interface{} {
|
||||
return err
|
||||
}
|
||||
|
||||
bb, err := newBufferFromBuffer(bitio.NewBitReader(buf.Bytes(), -1), 8)
|
||||
bb, err := newBinaryFromBitReader(bitio.NewBitReader(buf.Bytes(), -1), 8, 0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -254,7 +254,7 @@ func (i *Interp) aesCtr(c interface{}, a []interface{}) interface{} {
|
||||
|
||||
func (i *Interp) _hexdump(c interface{}, a []interface{}) gojq.Iter {
|
||||
opts := i.Options(a[0])
|
||||
bv, err := toBuffer(c)
|
||||
bv, err := toBinary(c)
|
||||
if err != nil {
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
|
@ -202,22 +202,9 @@ def table(colmap; render):
|
||||
)
|
||||
end;
|
||||
|
||||
# convert number to array of bytes
|
||||
def number_to_bytes($bits):
|
||||
def _number_to_bytes($d):
|
||||
if . > 0 then
|
||||
. % $d, (intdiv(.; $d) | _number_to_bytes($d))
|
||||
else
|
||||
empty
|
||||
end;
|
||||
if . == 0 then [0]
|
||||
else [_number_to_bytes(pow(2; $bits) | _to_int)] | reverse
|
||||
end;
|
||||
def number_to_bytes:
|
||||
number_to_bytes(8);
|
||||
|
||||
def from_radix($base; $table):
|
||||
( split("")
|
||||
def fromradix($base; $table):
|
||||
( if type != "string" then error("cannot fromradix convert: \(.)") end
|
||||
| split("")
|
||||
| reverse
|
||||
| map($table[.])
|
||||
| if . == null then error("invalid char \(.)") end
|
||||
@ -229,75 +216,45 @@ def from_radix($base; $table):
|
||||
)
|
||||
| .[1]
|
||||
);
|
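A worked sketch of the lookup-and-reduce above, using the default digit table defined further down (digits, then lowercase, then uppercase): in base 62 "g" maps to 16 and "w" to 32, so the result is 16*62 + 32 = 1024, matching the funcs.fqtest expectation below:
null> "gw" | fromradix(62)
1024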
||||
|
||||
def to_radix($base; $table):
|
||||
if . == 0 then "0"
|
||||
else
|
||||
( [ recurse(if . > 0 then intdiv(.; $base) else empty end) | . % $base]
|
||||
| reverse
|
||||
| .[1:]
|
||||
| if $base <= ($table | length) then
|
||||
map($table[.]) | join("")
|
||||
else
|
||||
error("base too large")
|
||||
end
|
||||
)
|
||||
end;
|
||||
|
||||
def radix($base; $to_table; $from_table):
|
||||
if . | type == "number" then to_radix($base; $to_table)
|
||||
elif . | type == "string" then from_radix($base; $from_table)
|
||||
else error("needs to be number or string")
|
||||
end;
|
||||
|
||||
def radix2: radix(2; "01"; {"0": 0, "1": 1});
|
||||
def radix8: radix(8; "01234567"; {"0": 0, "1": 1, "2": 2, "3": 3,"4": 4, "5": 5, "6": 6, "7": 7});
|
||||
def radix16: radix(16; "0123456789abcdef"; {
|
||||
def fromradix($base):
|
||||
fromradix($base; {
|
||||
"0": 0, "1": 1, "2": 2, "3": 3,"4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
|
||||
"a": 10, "b": 11, "c": 12, "d": 13, "e": 14, "f": 15
|
||||
});
|
||||
def radix62: radix(62; "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; {
|
||||
"0": 0, "1": 1, "2": 2, "3": 3,"4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
|
||||
"A": 10, "B": 11, "C": 12, "D": 13, "E": 14, "F": 15, "G": 16,
|
||||
"H": 17, "I": 18, "J": 19, "K": 20, "L": 21, "M": 22, "N": 23,
|
||||
"O": 24, "P": 25, "Q": 26, "R": 27, "S": 28, "T": 29, "U": 30,
|
||||
"V": 31, "W": 32, "X": 33, "Y": 34, "Z": 35,
|
||||
"a": 36, "b": 37, "c": 38, "d": 39, "e": 40, "f": 41, "g": 42,
|
||||
"h": 43, "i": 44, "j": 45, "k": 46, "l": 47, "m": 48, "n": 49,
|
||||
"o": 50, "p": 51, "q": 52, "r": 53, "s": 54, "t": 55, "u": 56,
|
||||
"v": 57, "w": 58, "x": 59, "y": 60, "z": 61
|
||||
});
|
||||
def radix62: radix(62; "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; {
|
||||
"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5, "G": 6,
|
||||
"H": 7, "I": 8, "J": 9, "K": 10, "L": 11, "M": 12, "N": 13,
|
||||
"O": 14, "P": 15, "Q": 16, "R": 17, "S": 18, "T": 19, "U": 20,
|
||||
"V": 21, "W": 22, "X": 23, "Y": 24, "Z": 25,
|
||||
"a": 26, "b": 27, "c": 28, "d": 29, "e": 30, "f": 31, "g": 32,
|
||||
"h": 33, "i": 34, "j": 35, "k": 36, "l": 37, "m": 38, "n": 39,
|
||||
"o": 40, "p": 41, "q": 42, "r": 43, "s": 44, "t": 45, "u": 46,
|
||||
"v": 47, "w": 48, "x": 49, "y": 50, "z": 51,
|
||||
"0": 52, "1": 53, "2": 54, "3": 55, "4": 56, "5": 57, "6": 58, "7": 59, "8": 60, "9": 61
|
||||
});
|
||||
def radix64: radix(64; "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; {
|
||||
"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5, "G": 6,
|
||||
"H": 7, "I": 8, "J": 9, "K": 10, "L": 11, "M": 12, "N": 13,
|
||||
"O": 14, "P": 15, "Q": 16, "R": 17, "S": 18, "T": 19, "U": 20,
|
||||
"V": 21, "W": 22, "X": 23, "Y": 24, "Z": 25,
|
||||
"a": 26, "b": 27, "c": 28, "d": 29, "e": 30, "f": 31, "g": 32,
|
||||
"h": 33, "i": 34, "j": 35, "k": 36, "l": 37, "m": 38, "n": 39,
|
||||
"o": 40, "p": 41, "q": 42, "r": 43, "s": 44, "t": 45, "u": 46,
|
||||
"v": 47, "w": 48, "x": 49, "y": 50, "z": 51,
|
||||
"0": 52, "1": 53, "2": 54, "3": 55, "4": 56, "5": 57, "6": 58, "7": 59, "8": 60, "9": 61,
|
||||
"+": 62, "/": 63
|
||||
"a": 10, "b": 11, "c": 12, "d": 13, "e": 14, "f": 15, "g": 16,
|
||||
"h": 17, "i": 18, "j": 19, "k": 20, "l": 21, "m": 22, "n": 23,
|
||||
"o": 24, "p": 25, "q": 26, "r": 27, "s": 28, "t": 29, "u": 30,
|
||||
"v": 31, "w": 32, "x": 33, "y": 34, "z": 35,
|
||||
"A": 36, "B": 37, "C": 38, "D": 39, "E": 40, "F": 41, "G": 42,
|
||||
"H": 43, "I": 44, "J": 45, "K": 46, "L": 47, "M": 48, "N": 49,
|
||||
"O": 50, "P": 51, "Q": 52, "R": 53, "S": 54, "T": 55, "U": 56,
|
||||
"V": 57, "W": 58, "X": 59, "Y": 60, "Z": 61,
|
||||
"@": 62, "_": 63,
|
||||
});
|
||||
|
||||
def toradix($base; $table):
|
||||
( if type != "number" then error("cannot toradix convert: \(.)") end
|
||||
| if . == 0 then "0"
|
||||
else
|
||||
( [ recurse(if . > 0 then intdiv(.; $base) else empty end) | . % $base]
|
||||
| reverse
|
||||
| .[1:]
|
||||
| if $base <= ($table | length) then
|
||||
map($table[.]) | join("")
|
||||
else
|
||||
error("base too large")
|
||||
end
|
||||
)
|
||||
end
|
||||
);
|
||||
def toradix($base):
|
||||
toradix($base; "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@_");
|
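Repl-style sketch of the single-argument form with its default digit table (values as in the funcs.fqtest expectations below):
null> 1024 | toradix(2), toradix(16), toradix(62)
"10000000000"
"400"
"gw"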
||||
|
||||
# TODO: rename keys and add more, ascii/utf8/utf16/codepoint name?, le/be, signed/unsigned?
|
||||
def iprint:
|
||||
{
|
||||
bin: "0b\(radix2)",
|
||||
oct: "0o\(radix8)",
|
||||
bin: "0b\(toradix(2))",
|
||||
oct: "0o\(toradix(8))",
|
||||
dec: "\(.)",
|
||||
hex: "0x\(radix16)",
|
||||
hex: "0x\(toradix(16))",
|
||||
str: (try ([.] | implode) catch null),
|
||||
};
|
||||
|
||||
@ -348,3 +305,14 @@ def topem($label):
|
||||
| join("\n")
|
||||
);
|
||||
def topem: topem("");
|
||||
|
||||
def paste:
|
||||
if _is_completing | not then
|
||||
( [ _repeat_break(
|
||||
try _stdin(64*1024)
|
||||
catch if . == "eof" then error("break") end
|
||||
)
|
||||
]
|
||||
| join("")
|
||||
)
|
||||
end;
|
||||
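paste reads stdin in 64KiB chunks until EOF and returns the concatenation as one string, as exercised by the new paste.fqtest below; a typical interactive use (hypothetical) is to feed the pasted text to a parser such as fromjson:
null> paste | fromjson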
|
@ -45,33 +45,4 @@ include "funcs";
|
||||
["1234", 2, ["12","34"]],
|
||||
["1234", 3, ["123","4"]]
|
||||
][] | . as $t | assert("\($t[0]) | chunk(\($t[1]))"; $t[2]; $t[0] | chunk($t[1])))
|
||||
,
|
||||
([
|
||||
# 0xfffffffffffffffffffffffffffffffffffffffffffff
|
||||
[1532495540865888858358347027150309183618739122183602175, 8, [
|
||||
15,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255,
|
||||
255
|
||||
]]
|
||||
][] | . as $t | assert("\($t[0]) | number_to_bytes(\($t[1]))"; $t[2]; $t[0] | number_to_bytes($t[1])))
|
||||
)
|
||||
|
@ -11,6 +11,7 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/fs"
|
||||
"io/ioutil"
|
||||
"math/big"
|
||||
"path"
|
||||
"strconv"
|
||||
@ -22,6 +23,7 @@ import (
|
||||
"github.com/wader/fq/internal/bitioextra"
|
||||
"github.com/wader/fq/internal/colorjson"
|
||||
"github.com/wader/fq/internal/ctxstack"
|
||||
"github.com/wader/fq/internal/gojqextra"
|
||||
"github.com/wader/fq/internal/ioextra"
|
||||
"github.com/wader/fq/internal/mathextra"
|
||||
"github.com/wader/fq/internal/pos"
|
||||
@ -35,7 +37,7 @@ import (
|
||||
//go:embed interp.jq
|
||||
//go:embed internal.jq
|
||||
//go:embed options.jq
|
||||
//go:embed buffer.jq
|
||||
//go:embed binary.jq
|
||||
//go:embed decode.jq
|
||||
//go:embed match.jq
|
||||
//go:embed funcs.jq
|
||||
@ -53,11 +55,11 @@ var functionRegisterFns []func(i *Interp) []Function
|
||||
func init() {
|
||||
functionRegisterFns = append(functionRegisterFns, func(i *Interp) []Function {
|
||||
return []Function{
|
||||
{"_readline", 0, 2, i.readline, nil},
|
||||
{"_readline", 0, 1, nil, i._readline},
|
||||
{"eval", 1, 2, nil, i.eval},
|
||||
{"_stdin", 0, 0, nil, i.makeStdioFn(i.os.Stdin())},
|
||||
{"_stdout", 0, 0, nil, i.makeStdioFn(i.os.Stdout())},
|
||||
{"_stderr", 0, 0, nil, i.makeStdioFn(i.os.Stderr())},
|
||||
{"_stdin", 0, 1, nil, i.makeStdioFn("stdin", i.os.Stdin())},
|
||||
{"_stdout", 0, 0, nil, i.makeStdioFn("stdout", i.os.Stdout())},
|
||||
{"_stderr", 0, 0, nil, i.makeStdioFn("stderr", i.os.Stderr())},
|
||||
{"_extkeys", 0, 0, i._extKeys, nil},
|
||||
{"_exttype", 0, 0, i._extType, nil},
|
||||
{"_global_state", 0, 1, i.makeStateFn(i.state), nil},
|
||||
@ -65,6 +67,7 @@ func init() {
|
||||
{"_display", 1, 1, nil, i._display},
|
||||
{"_can_display", 0, 0, i._canDisplay, nil},
|
||||
{"_print_color_json", 0, 1, nil, i._printColorJSON},
|
||||
{"_is_completing", 0, 1, i._isCompleting, nil},
|
||||
}
|
||||
})
|
||||
}
|
||||
@ -95,7 +98,7 @@ func (ce compileError) Value() interface{} {
|
||||
func (ce compileError) Error() string {
|
||||
filename := ce.filename
|
||||
if filename == "" {
|
||||
filename = "src"
|
||||
filename = "expr"
|
||||
}
|
||||
return fmt.Sprintf("%s:%d:%d: %s: %s", filename, ce.pos.Line, ce.pos.Column, ce.what, ce.err.Error())
|
||||
}
|
||||
@ -133,6 +136,11 @@ type Platform struct {
|
||||
Arch string
|
||||
}
|
||||
|
||||
type ReadlineOpts struct {
|
||||
Prompt string
|
||||
CompleteFn func(line string, pos int) (newLine []string, shared int)
|
||||
}
|
||||
|
||||
type OS interface {
|
||||
Platform() Platform
|
||||
Stdin() Input
|
||||
@ -144,7 +152,7 @@ type OS interface {
|
||||
ConfigDir() (string, error)
|
||||
// FS.File returned by FS().Open() can optionally implement io.Seeker
|
||||
FS() fs.FS
|
||||
Readline(prompt string, complete func(line string, pos int) (newLine []string, shared int)) (string, error)
|
||||
Readline(opts ReadlineOpts) (string, error)
|
||||
History() ([]string, error)
|
||||
}
|
||||
|
||||
@ -270,7 +278,7 @@ func toBigInt(v interface{}) (*big.Int, error) {
|
||||
func toBytes(v interface{}) ([]byte, error) {
|
||||
switch v := v.(type) {
|
||||
default:
|
||||
br, err := toBitBuf(v)
|
||||
br, err := toBitReader(v)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("value is not bytes")
|
||||
}
|
||||
@ -283,14 +291,14 @@ func toBytes(v interface{}) ([]byte, error) {
|
||||
}
|
||||
}
|
||||
|
||||
func queryErrorPosition(src string, v error) pos.Pos {
|
||||
func queryErrorPosition(expr string, v error) pos.Pos {
|
||||
var offset int
|
||||
|
||||
if tokIf, ok := v.(interface{ Token() (string, int) }); ok { //nolint:errorlint
|
||||
_, offset = tokIf.Token()
|
||||
}
|
||||
if offset >= 0 {
|
||||
return pos.NewFromOffset(src, offset)
|
||||
return pos.NewFromOffset(expr, offset)
|
||||
}
|
||||
return pos.Pos{}
|
||||
}
|
||||
@ -317,17 +325,18 @@ const (
|
||||
)
|
||||
|
||||
type evalContext struct {
|
||||
ctx context.Context
|
||||
output io.Writer
|
||||
ctx context.Context
|
||||
output io.Writer
|
||||
isCompleting bool
|
||||
}
|
||||
|
||||
type Interp struct {
|
||||
registry *registry.Registry
|
||||
os OS
|
||||
initFqQuery *gojq.Query
|
||||
initQuery *gojq.Query
|
||||
includeCache map[string]*gojq.Query
|
||||
interruptStack *ctxstack.Stack
|
||||
// global state, is ref as Interp i cloned per eval
|
||||
// global state, is ref as Interp is cloned per eval
|
||||
state *interface{}
|
||||
|
||||
// new for each run, other values are copied by value
|
||||
@ -343,7 +352,7 @@ func New(os OS, registry *registry.Registry) (*Interp, error) {
|
||||
}
|
||||
|
||||
i.includeCache = map[string]*gojq.Query{}
|
||||
i.initFqQuery, err = gojq.Parse(initSource)
|
||||
i.initQuery, err = gojq.Parse(initSource)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("init:%s: %w", queryErrorPosition(initSource, err), err)
|
||||
}
|
||||
@ -380,7 +389,7 @@ func (i *Interp) Main(ctx context.Context, output Output, versionStr string) err
|
||||
"arch": platform.Arch,
|
||||
}
|
||||
|
||||
iter, err := i.EvalFunc(ctx, input, "_main", nil, output)
|
||||
iter, err := i.EvalFunc(ctx, input, "_main", nil, EvalOpts{output: output})
|
||||
if err != nil {
|
||||
fmt.Fprintln(i.os.Stderr(), err)
|
||||
return err
|
||||
@ -414,28 +423,24 @@ func (i *Interp) Main(ctx context.Context, output Output, versionStr string) err
|
||||
return nil
|
||||
}
|
||||
|
||||
func (i *Interp) readline(c interface{}, a []interface{}) interface{} {
|
||||
func (i *Interp) _readline(c interface{}, a []interface{}) gojq.Iter {
|
||||
if i.evalContext.isCompleting {
|
||||
return gojq.NewIter()
|
||||
}
|
||||
|
||||
var opts struct {
|
||||
Promopt string `mapstructure:"prompt"`
|
||||
Complete string `mapstructure:"complete"`
|
||||
Timeout float64 `mapstructure:"timeout"`
|
||||
}
|
||||
|
||||
var err error
|
||||
prompt := ""
|
||||
|
||||
if len(a) > 0 {
|
||||
prompt, err = toString(a[0])
|
||||
if err != nil {
|
||||
return fmt.Errorf("prompt: %w", err)
|
||||
}
|
||||
}
|
||||
if len(a) > 1 {
|
||||
_ = mapstructure.Decode(a[1], &opts)
|
||||
_ = mapstructure.Decode(a[0], &opts)
|
||||
}
|
||||
|
||||
src, err := i.os.Readline(
|
||||
prompt,
|
||||
func(line string, pos int) (newLine []string, shared int) {
|
||||
expr, err := i.os.Readline(ReadlineOpts{
|
||||
Prompt: opts.Promopt,
|
||||
CompleteFn: func(line string, pos int) (newLine []string, shared int) {
|
||||
completeCtx := i.evalContext.ctx
|
||||
if opts.Timeout > 0 {
|
||||
var completeCtxCancelFn context.CancelFunc
|
||||
@ -450,7 +455,10 @@ func (i *Interp) readline(c interface{}, a []interface{}) interface{} {
|
||||
c,
|
||||
opts.Complete,
|
||||
[]interface{}{line, pos},
|
||||
ioextra.DiscardCtxWriter{Ctx: completeCtx},
|
||||
EvalOpts{
|
||||
output: ioextra.DiscardCtxWriter{Ctx: completeCtx},
|
||||
isCompleting: true,
|
||||
},
|
||||
)
|
||||
if err != nil {
|
||||
return nil, pos, err
|
||||
@ -485,24 +493,24 @@ func (i *Interp) readline(c interface{}, a []interface{}) interface{} {
|
||||
|
||||
return names, shared
|
||||
},
|
||||
)
|
||||
})
|
||||
|
||||
if errors.Is(err, ErrInterrupt) {
|
||||
return valueError{"interrupt"}
|
||||
return gojq.NewIter(valueError{"interrupt"})
|
||||
} else if errors.Is(err, ErrEOF) {
|
||||
return valueError{"eof"}
|
||||
return gojq.NewIter(valueError{"eof"})
|
||||
} else if err != nil {
|
||||
return err
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
|
||||
return src
|
||||
return gojq.NewIter(expr)
|
||||
}
|
||||
|
||||
func (i *Interp) eval(c interface{}, a []interface{}) gojq.Iter {
|
||||
var err error
|
||||
src, err := toString(a[0])
|
||||
expr, err := toString(a[0])
|
||||
if err != nil {
|
||||
return gojq.NewIter(fmt.Errorf("src: %w", err))
|
||||
return gojq.NewIter(fmt.Errorf("expr: %w", err))
|
||||
}
|
||||
var filenameHint string
|
||||
if len(a) >= 2 {
|
||||
@ -512,7 +520,10 @@ func (i *Interp) eval(c interface{}, a []interface{}) gojq.Iter {
|
||||
}
|
||||
}
|
||||
|
||||
iter, err := i.Eval(i.evalContext.ctx, c, src, filenameHint, i.evalContext.output)
|
||||
iter, err := i.Eval(i.evalContext.ctx, c, expr, EvalOpts{
|
||||
filename: filenameHint,
|
||||
output: i.evalContext.output,
|
||||
})
|
||||
if err != nil {
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
@ -547,25 +558,57 @@ func (i *Interp) makeStateFn(state *interface{}) func(c interface{}, a []interfa
|
||||
}
|
||||
}
|
||||
|
||||
func (i *Interp) makeStdioFn(t Terminal) func(c interface{}, a []interface{}) gojq.Iter {
|
||||
func (i *Interp) makeStdioFn(name string, t Terminal) func(c interface{}, a []interface{}) gojq.Iter {
|
||||
return func(c interface{}, a []interface{}) gojq.Iter {
|
||||
if c == nil {
|
||||
switch {
|
||||
case len(a) == 1:
|
||||
if i.evalContext.isCompleting {
|
||||
return gojq.NewIter("")
|
||||
}
|
||||
|
||||
r, ok := t.(io.Reader)
|
||||
if !ok {
|
||||
return gojq.NewIter(fmt.Errorf("%s is not readable", name))
|
||||
}
|
||||
l, ok := gojqextra.ToInt(a[0])
|
||||
if !ok {
|
||||
return gojq.NewIter(gojqextra.FuncTypeError{Name: name, V: a[0]})
|
||||
}
|
||||
|
||||
buf := make([]byte, l)
|
||||
n, err := io.ReadFull(r, buf)
|
||||
s := string(buf[0:n])
|
||||
|
||||
vs := []interface{}{s}
|
||||
switch {
|
||||
case errors.Is(err, io.EOF), errors.Is(err, io.ErrUnexpectedEOF):
|
||||
vs = append(vs, valueError{"eof"})
|
||||
default:
|
||||
vs = append(vs, err)
|
||||
}
|
||||
|
||||
return gojq.NewIter(vs...)
|
||||
case c == nil:
|
||||
w, h := t.Size()
|
||||
return gojq.NewIter(map[string]interface{}{
|
||||
"is_terminal": t.IsTerminal(),
|
||||
"width": w,
|
||||
"height": h,
|
||||
})
|
||||
}
|
||||
default:
|
||||
if i.evalContext.isCompleting {
|
||||
return gojq.NewIter()
|
||||
}
|
||||
|
||||
if w, ok := t.(io.Writer); ok {
|
||||
w, ok := t.(io.Writer)
|
||||
if !ok {
|
||||
return gojq.NewIter(fmt.Errorf("%v: is not writeable", c))
|
||||
}
|
||||
if _, err := fmt.Fprint(w, c); err != nil {
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
return gojq.NewIter()
|
||||
}
|
||||
|
||||
return gojq.NewIter(fmt.Errorf("%v: is not writeable", c))
|
||||
}
|
||||
}
|
||||
|
||||
@ -595,6 +638,11 @@ func (i *Interp) _display(c interface{}, a []interface{}) gojq.Iter {
|
||||
}
|
||||
}
|
||||
|
||||
func (i *Interp) _canDisplay(c interface{}, a []interface{}) interface{} {
|
||||
_, ok := c.(Display)
|
||||
return ok
|
||||
}
|
||||
|
||||
func (i *Interp) _printColorJSON(c interface{}, a []interface{}) gojq.Iter {
|
||||
opts := i.Options(a[0])
|
||||
|
||||
@ -609,64 +657,77 @@ func (i *Interp) _printColorJSON(c interface{}, a []interface{}) gojq.Iter {
|
||||
return gojq.NewIter()
|
||||
}
|
||||
|
||||
func (i *Interp) _canDisplay(c interface{}, a []interface{}) interface{} {
|
||||
_, ok := c.(Display)
|
||||
return ok
|
||||
func (i *Interp) _isCompleting(c interface{}, a []interface{}) interface{} {
|
||||
return i.evalContext.isCompleting
|
||||
}
|
||||
|
||||
type pathResolver struct {
|
||||
prefix string
|
||||
open func(filename string) (io.ReadCloser, error)
|
||||
open func(filename string) (io.ReadCloser, string, error)
|
||||
}
|
||||
|
||||
func (i *Interp) lookupPathResolver(filename string) (pathResolver, bool) {
|
||||
func (i *Interp) lookupPathResolver(filename string) (pathResolver, error) {
|
||||
configDir, err := i.os.ConfigDir()
|
||||
if err != nil {
|
||||
return pathResolver{}, err
|
||||
}
|
||||
|
||||
resolvePaths := []pathResolver{
|
||||
{
|
||||
"@builtin/",
|
||||
func(filename string) (io.ReadCloser, error) { return builtinFS.Open(filename) },
|
||||
},
|
||||
{
|
||||
"@config/", func(filename string) (io.ReadCloser, error) {
|
||||
configDir, err := i.os.ConfigDir()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return i.os.FS().Open(path.Join(configDir, filename))
|
||||
func(filename string) (io.ReadCloser, string, error) {
|
||||
f, err := builtinFS.Open(filename)
|
||||
return f, "@builtin/" + filename, err
|
||||
},
|
||||
},
|
||||
{
|
||||
"", func(filename string) (io.ReadCloser, error) {
|
||||
"@config/", func(filename string) (io.ReadCloser, string, error) {
|
||||
p := path.Join(configDir, filename)
|
||||
f, err := i.os.FS().Open(p)
|
||||
return f, p, err
|
||||
},
|
||||
},
|
||||
{
|
||||
"", func(filename string) (io.ReadCloser, string, error) {
|
||||
if path.IsAbs(filename) {
|
||||
return i.os.FS().Open(filename)
|
||||
f, err := i.os.FS().Open(filename)
|
||||
return f, filename, err
|
||||
}
|
||||
|
||||
// TODO: jq $ORIGIN
|
||||
for _, includePath := range append([]string{"./"}, i.includePaths()...) {
|
||||
p := path.Join(includePath, filename)
|
||||
if f, err := i.os.FS().Open(path.Join(includePath, filename)); err == nil {
|
||||
return f, nil
|
||||
return f, p, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, &fs.PathError{Op: "open", Path: filename, Err: fs.ErrNotExist}
|
||||
return nil, "", &fs.PathError{Op: "open", Path: filename, Err: fs.ErrNotExist}
|
||||
},
|
||||
},
|
||||
}
|
||||
for _, p := range resolvePaths {
|
||||
if strings.HasPrefix(filename, p.prefix) {
|
||||
return p, true
|
||||
return p, nil
|
||||
}
|
||||
}
|
||||
return pathResolver{}, false
|
||||
return pathResolver{}, fmt.Errorf("could not resolve path: %s", filename)
|
||||
}
|
||||
|
||||
func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilename string, output io.Writer) (gojq.Iter, error) {
|
||||
gq, err := gojq.Parse(src)
|
||||
type EvalOpts struct {
|
||||
filename string
|
||||
output io.Writer
|
||||
isCompleting bool
|
||||
}
|
||||
|
||||
func (i *Interp) Eval(ctx context.Context, c interface{}, expr string, opts EvalOpts) (gojq.Iter, error) {
|
||||
gq, err := gojq.Parse(expr)
|
||||
if err != nil {
|
||||
p := queryErrorPosition(src, err)
|
||||
p := queryErrorPosition(expr, err)
|
||||
return nil, compileError{
|
||||
err: err,
|
||||
what: "parse",
|
||||
filename: srcFilename,
|
||||
filename: opts.filename,
|
||||
pos: p,
|
||||
}
|
||||
}
|
||||
@ -701,7 +762,7 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
compilerOpts = append(compilerOpts, gojq.WithVariables(variableNames))
|
||||
compilerOpts = append(compilerOpts, gojq.WithModuleLoader(loadModule{
|
||||
init: func() ([]*gojq.Query, error) {
|
||||
return []*gojq.Query{i.initFqQuery}, nil
|
||||
return []*gojq.Query{i.initQuery}, nil
|
||||
},
|
||||
load: func(name string) (*gojq.Query, error) {
|
||||
if err := ctx.Err(); err != nil {
|
||||
@ -719,9 +780,9 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
}
|
||||
filename = filename + ".jq"
|
||||
|
||||
pr, ok := i.lookupPathResolver(filename)
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("could not resolve path: %s", filename)
|
||||
pr, err := i.lookupPathResolver(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if q, ok := ni.includeCache[filename]; ok {
|
||||
@ -730,7 +791,7 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
|
||||
filenamePart := strings.TrimPrefix(filename, pr.prefix)
|
||||
|
||||
f, err := pr.open(filenamePart)
|
||||
f, absPath, err := pr.open(filenamePart)
|
||||
if err != nil {
|
||||
if !isTry {
|
||||
return nil, err
|
||||
@ -750,7 +811,7 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
return nil, compileError{
|
||||
err: err,
|
||||
what: "parse",
|
||||
filename: filenamePart,
|
||||
filename: absPath,
|
||||
pos: p,
|
||||
}
|
||||
}
|
||||
@ -819,18 +880,25 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
|
||||
gc, err := gojq.Compile(gq, compilerOpts...)
|
||||
if err != nil {
|
||||
p := queryErrorPosition(src, err)
|
||||
p := queryErrorPosition(expr, err)
|
||||
return nil, compileError{
|
||||
err: err,
|
||||
what: "compile",
|
||||
filename: srcFilename,
|
||||
filename: opts.filename,
|
||||
pos: p,
|
||||
}
|
||||
}
|
||||
|
||||
output := opts.output
|
||||
if opts.output == nil {
|
||||
output = ioutil.Discard
|
||||
}
|
||||
|
||||
runCtx, runCtxCancelFn := i.interruptStack.Push(ctx)
|
||||
ni.evalContext.ctx = runCtx
|
||||
ni.evalContext.output = ioextra.CtxWriter{Writer: output, Ctx: runCtx}
|
||||
// inherit or set
|
||||
ni.evalContext.isCompleting = i.evalContext.isCompleting || opts.isCompleting
|
||||
iter := gc.RunWithContext(runCtx, c, variableValues...)
|
||||
|
||||
iterWrapper := iterFn(func() (interface{}, bool) {
|
||||
@ -845,7 +913,7 @@ func (i *Interp) Eval(ctx context.Context, c interface{}, src string, srcFilenam
|
||||
return iterWrapper, nil
|
||||
}
|
||||
|
||||
func (i *Interp) EvalFunc(ctx context.Context, c interface{}, name string, args []interface{}, output io.Writer) (gojq.Iter, error) {
|
||||
func (i *Interp) EvalFunc(ctx context.Context, c interface{}, name string, args []interface{}, opts EvalOpts) (gojq.Iter, error) {
|
||||
var argsExpr []string
|
||||
for i := range args {
|
||||
argsExpr = append(argsExpr, fmt.Sprintf("$_args[%d]", i))
|
||||
@ -862,15 +930,15 @@ func (i *Interp) EvalFunc(ctx context.Context, c interface{}, name string, args
|
||||
// _args to mark variable as internal and hide it from completion
|
||||
// {input: ..., args: [...]} | .args as {args: $_args} | .input | name[($_args[0]; ...)]
|
||||
trampolineExpr := fmt.Sprintf(". as {args: $_args} | .input | %s%s", name, argExpr)
|
||||
iter, err := i.Eval(ctx, trampolineInput, trampolineExpr, "", output)
|
||||
iter, err := i.Eval(ctx, trampolineInput, trampolineExpr, opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return iter, nil
|
||||
}
|
||||
|
||||
func (i *Interp) EvalFuncValues(ctx context.Context, c interface{}, name string, args []interface{}, output io.Writer) ([]interface{}, error) {
|
||||
iter, err := i.EvalFunc(ctx, c, name, args, output)
|
||||
func (i *Interp) EvalFuncValues(ctx context.Context, c interface{}, name string, args []interface{}, opts EvalOpts) ([]interface{}, error) {
|
||||
iter, err := i.EvalFunc(ctx, c, name, args, opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -1,6 +1,6 @@
|
||||
include "internal";
|
||||
include "options";
|
||||
include "buffer";
|
||||
include "binary";
|
||||
include "decode";
|
||||
include "match";
|
||||
include "funcs";
|
||||
|
@ -15,15 +15,15 @@ import (
|
||||
func init() {
|
||||
functionRegisterFns = append(functionRegisterFns, func(i *Interp) []Function {
|
||||
return []Function{
|
||||
{"_match_buffer", 1, 2, nil, i._bufferMatch},
|
||||
{"_match_binary", 1, 2, nil, i._binaryMatch},
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func (i *Interp) _bufferMatch(c interface{}, a []interface{}) gojq.Iter {
|
||||
func (i *Interp) _binaryMatch(c interface{}, a []interface{}) gojq.Iter {
|
||||
var ok bool
|
||||
|
||||
bv, err := toBuffer(c)
|
||||
bv, err := toBinary(c)
|
||||
if err != nil {
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
@ -70,7 +70,7 @@ func (i *Interp) _bufferMatch(c interface{}, a []interface{}) gojq.Iter {
|
||||
}
|
||||
sreNames := sre.SubexpNames()
|
||||
|
||||
br, err := bv.toBuffer()
|
||||
br, err := bv.toReader()
|
||||
if err != nil {
|
||||
return gojq.NewIter(err)
|
||||
}
|
||||
@ -92,7 +92,7 @@ func (i *Interp) _bufferMatch(c interface{}, a []interface{}) gojq.Iter {
|
||||
var off int64
|
||||
prevOff := int64(-1)
|
||||
return iterFn(func() (interface{}, bool) {
|
||||
// TODO: correct way to handle empty match for buffer, move one byte forward?
|
||||
// TODO: correct way to handle empty match for binary, move one byte forward?
|
||||
// > "asdasd" | [match(""; "g")], [(tobytes | match(""; "g"))] | length
|
||||
// 7
|
||||
// 1
|
||||
@ -127,7 +127,7 @@ func (i *Interp) _bufferMatch(c interface{}, a []interface{}) gojq.Iter {
|
||||
if start != -1 {
|
||||
matchBitOff := (off + int64(start)) * 8
|
||||
matchLength := int64(end-start) * 8
|
||||
bbo := Buffer{
|
||||
bbo := Binary{
|
||||
br: bv.br,
|
||||
r: ranges.Range{
|
||||
Start: bv.r.Start + matchBitOff,
|
||||
|
@ -1,10 +1,10 @@
|
||||
def _buffer_fn(f):
|
||||
def _binary_fn(f):
|
||||
( . as $c
|
||||
| tobytesrange
|
||||
| f
|
||||
);
|
||||
|
||||
def _buffer_try_orig(bfn; fn):
|
||||
def _binary_try_orig(bfn; fn):
|
||||
( . as $c
|
||||
| if type == "string" then fn
|
||||
else
|
||||
@ -15,27 +15,27 @@ def _buffer_try_orig(bfn; fn):
|
||||
end
|
||||
);
|
||||
|
||||
# overloads to support buffer
|
||||
# overloads to support binary
|
||||
|
||||
def _orig_test($val): test($val);
|
||||
def _orig_test($regex; $flags): test($regex; $flags);
|
||||
def _test_buffer($regex; $flags):
|
||||
( isempty(_match_buffer($regex; $flags))
|
||||
def _test_binary($regex; $flags):
|
||||
( isempty(_match_binary($regex; $flags))
|
||||
| not
|
||||
);
|
||||
def test($val): _buffer_try_orig(_test_buffer($val; ""); _orig_test($val));
|
||||
def test($regex; $flags): _buffer_try_orig(_test_buffer($regex; $flags); _orig_test($regex; $flags));
|
||||
def test($val): _binary_try_orig(_test_binary($val; ""); _orig_test($val));
|
||||
def test($regex; $flags): _binary_try_orig(_test_binary($regex; $flags); _orig_test($regex; $flags));
|
||||
|
||||
def _orig_match($val): match($val);
|
||||
def _orig_match($regex; $flags): match($regex; $flags);
|
||||
def match($val): _buffer_try_orig(_match_buffer($val); _orig_match($val));
|
||||
def match($regex; $flags): _buffer_try_orig(_match_buffer($regex; $flags); _orig_match($regex; $flags));
|
||||
def match($val): _binary_try_orig(_match_binary($val); _orig_match($val));
|
||||
def match($regex; $flags): _binary_try_orig(_match_binary($regex; $flags); _orig_match($regex; $flags));
|
||||
|
||||
def _orig_capture($val): capture($val);
|
||||
def _orig_capture($regex; $flags): capture($regex; $flags);
|
||||
def _capture_buffer($regex; $flags):
|
||||
def _capture_binary($regex; $flags):
|
||||
( . as $b
|
||||
| _match_buffer($regex; $flags)
|
||||
| _match_binary($regex; $flags)
|
||||
| .captures
|
||||
| map(
|
||||
( select(.name)
|
||||
@ -44,25 +44,25 @@ def _capture_buffer($regex; $flags):
|
||||
)
|
||||
| from_entries
|
||||
);
|
||||
def capture($val): _buffer_try_orig(_capture_buffer($val; ""); _orig_capture($val));
|
||||
def capture($regex; $flags): _buffer_try_orig(_capture_buffer($regex; $flags); _orig_capture($regex; $flags));
|
||||
def capture($val): _binary_try_orig(_capture_binary($val; ""); _orig_capture($val));
|
||||
def capture($regex; $flags): _binary_try_orig(_capture_binary($regex; $flags); _orig_capture($regex; $flags));
|
||||
|
||||
def _orig_scan($val): scan($val);
|
||||
def _orig_scan($regex; $flags): scan($regex; $flags);
|
||||
def _scan_buffer($regex; $flags):
|
||||
def _scan_binary($regex; $flags):
|
||||
( . as $b
|
||||
| _match_buffer($regex; $flags)
|
||||
| _match_binary($regex; $flags)
|
||||
| $b[.offset:.offset+.length]
|
||||
);
|
||||
def scan($val): _buffer_try_orig(_scan_buffer($val; "g"); _orig_scan($val));
|
||||
def scan($regex; $flags): _buffer_try_orig(_scan_buffer($regex; "g"+$flags); _orig_scan($regex; $flags));
|
||||
def scan($val): _binary_try_orig(_scan_binary($val; "g"); _orig_scan($val));
|
||||
def scan($regex; $flags): _binary_try_orig(_scan_binary($regex; "g"+$flags); _orig_scan($regex; $flags));
|
||||
|
||||
def _orig_splits($val): splits($val);
|
||||
def _orig_splits($regex; $flags): splits($regex; $flags);
|
||||
def _splits_buffer($regex; $flags):
|
||||
def _splits_binary($regex; $flags):
|
||||
( . as $b
|
||||
# last null output is to do a last iteration that output from end of last match to end of buffer
|
||||
| foreach (_match_buffer($regex; $flags), null) as $m (
|
||||
# last null output is to do a last iteration that output from end of last match to end of binary
|
||||
| foreach (_match_binary($regex; $flags), null) as $m (
|
||||
{prev: null, curr: null};
|
||||
( .prev = .curr
|
||||
| .curr = $m
|
||||
@ -73,8 +73,8 @@ def _splits_buffer($regex; $flags):
|
||||
end
|
||||
)
|
||||
);
|
||||
def splits($val): _buffer_try_orig(_splits_buffer($val; "g"); _orig_splits($val));
|
||||
def splits($regex; $flags): _buffer_try_orig(_splits_buffer($regex; "g"+$flags); _orig_splits($regex; $flags));
|
||||
def splits($val): _binary_try_orig(_splits_binary($val; "g"); _orig_splits($val));
|
||||
def splits($regex; $flags): _binary_try_orig(_splits_binary($regex; "g"+$flags); _orig_splits($regex; $flags));
|
||||
|
||||
# same as regexp.QuoteMeta
|
||||
def _quote_meta:
|
||||
@ -87,11 +87,11 @@ def split($val): [splits($val | _quote_meta)];
|
||||
def split($regex; $flags): [splits($regex; $flags)];
|
||||
|
||||
# TODO: rename
|
||||
# same as scan but outputs buffer from start of match to end of buffer
|
||||
# same as scan but outputs binary from start of match to end of binary
|
||||
def _scan_toend($regex; $flags):
|
||||
( . as $b
|
||||
| _match_buffer($regex; $flags)
|
||||
| _match_binary($regex; $flags)
|
||||
| $b[.offset:]
|
||||
);
|
||||
def scan_toend($val): _buffer_fn(_scan_toend($val; "g"));
|
||||
def scan_toend($regex; $flags): _buffer_fn(_scan_toend($regex; "g"+$flags));
|
||||
def scan_toend($val): _binary_fn(_scan_toend($val; "g"));
|
||||
def scan_toend($regex; $flags): _binary_fn(_scan_toend($regex; "g"+$flags));
|
||||
|
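A hedged repl sketch of the overloads (assumed behavior, not taken from the tests in this diff): on a binary, test and scan go through _match_binary, so they behave like the string versions except that scan yields binary slices:
mp3> "asdasd" | tobytes | test("sd")
true
mp3> "asdasd" | tobytes | scan("as") | tostring
"as"
"as"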
@ -28,6 +28,8 @@ def _complete_keywords:
|
||||
|
||||
def _complete_scope:
|
||||
[scope[], _complete_keywords[]];
|
||||
def _complete_keys:
|
||||
[keys[]?, _extkeys[]?];
|
||||
|
||||
# TODO: handle variables via ast walk?
|
||||
# TODO: refactor this
|
||||
@ -55,10 +57,7 @@ def _complete($line; $cursor_pos):
|
||||
# TODO: move map/add logic to here?
|
||||
| _query_completion(
|
||||
if .type | . == "func" or . == "var" then "_complete_scope"
|
||||
elif .type == "index" then
|
||||
if (.prefix | startswith("_")) then "_extkeys"
|
||||
else "keys"
|
||||
end
|
||||
elif .type == "index" then "_complete_keys"
|
||||
else error("unreachable")
|
||||
end
|
||||
) as {$type, $query, $prefix}
|
||||
@ -79,7 +78,7 @@ def _complete($line; $cursor_pos):
|
||||
strings and
|
||||
# TODO: var type really needed? just func?
|
||||
(_is_ident or $type == "var") and
|
||||
((_is_internal | not) or $prefix_is_internal or $type == "index") and
|
||||
((_is_internal | not) or $prefix_is_internal) and
|
||||
startswith($prefix)
|
||||
)
|
||||
)
|
||||
@ -182,7 +181,7 @@ def _repl($opts): #:: a|(Opts) => @
|
||||
def _read_expr:
|
||||
_repeat_break(
|
||||
# both _prompt and _complete want input arrays
|
||||
( _readline(_prompt; {complete: "_complete", timeout: 1})
|
||||
( _readline({prompt: _prompt, complete: "_complete", timeout: 1})
|
||||
| if trim == "" then empty
|
||||
else (., error("break"))
|
||||
end
|
||||
@ -216,12 +215,15 @@ def _repl($opts): #:: a|(Opts) => @
|
||||
else error
|
||||
end
|
||||
);
|
||||
( _options_stack(. + [$opts]) as $_
|
||||
| _finally(
|
||||
_repeat_break(_repl_loop);
|
||||
_options_stack(.[:-1])
|
||||
if _is_completing | not then
|
||||
( _options_stack(. + [$opts]) as $_
|
||||
| _finally(
|
||||
_repeat_break(_repl_loop);
|
||||
_options_stack(.[:-1])
|
||||
)
|
||||
)
|
||||
);
|
||||
else empty
|
||||
end;
|
||||
|
||||
def _repl_slurp($opts): _repl($opts);
|
||||
def _repl_slurp: _repl({});
|
||||
@ -229,7 +231,7 @@ def _repl_slurp: _repl({});
|
||||
# TODO: introspect and show doc, reflection somehow?
|
||||
def help:
|
||||
( "Type expression to evaluate"
|
||||
, "\\t Auto completion"
|
||||
, "\\t Completion"
|
||||
, "Up/Down History"
|
||||
, "^C Interrupt execution"
|
||||
, "... | repl Start a new REPL"
|
||||
|
92
pkg/interp/testdata/buffer.fqtest
vendored
@ -51,9 +51,9 @@ mp3> [1, 2, 3, [1, 2, 3], .headers[0].magic] | tobytes
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|01 02 03 01 02 03 49 44 33| |......ID3| |.: raw bits 0x0-0x8.7 (9)
|
||||
mp3> [-1] | tobytes
|
||||
error: buffer byte list must be bytes (0-255) got -1
|
||||
error: byte in binary list must be bytes (0-255) got -1
|
||||
mp3> [256] | tobytes
|
||||
error: buffer byte list must be bytes (0-255) got 256
|
||||
error: byte in binary list must be bytes (0-255) got 256
|
||||
mp3> ^D
|
||||
$ fq -d mp3 -i . /test.mp3
|
||||
mp3> .frames[1] | tobits | ., .start, .stop, .size, .[4:17], (tobits, tobytes, tobitsrange, tobytesrange | ., .start, .stop, .size, .[4:17])
|
||||
@ -256,41 +256,59 @@ mp3> "fq" | tobits | chunk(range(17)+1) | tobytes | tostring
|
||||
"fq"
|
||||
"fq"
|
||||
"fq"
|
||||
mp3> range(17) | [range(.) | 1 | tobits] | tobytes
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
| | |.: raw bits 0x0-NA (0)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|80| |.| |.: raw bits 0x0-0x0 (0.1)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|c0| |.| |.: raw bits 0x0-0x0.1 (0.2)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|e0| |.| |.: raw bits 0x0-0x0.2 (0.3)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|f0| |.| |.: raw bits 0x0-0x0.3 (0.4)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|f8| |.| |.: raw bits 0x0-0x0.4 (0.5)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|fc| |.| |.: raw bits 0x0-0x0.5 (0.6)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|fe| |.| |.: raw bits 0x0-0x0.6 (0.7)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff| |.| |.: raw bits 0x0-0x0.7 (1)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff 80| |..| |.: raw bits 0x0-0x1 (1.1)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff c0| |..| |.: raw bits 0x0-0x1.1 (1.2)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff e0| |..| |.: raw bits 0x0-0x1.2 (1.3)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff f0| |..| |.: raw bits 0x0-0x1.3 (1.4)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff f8| |..| |.: raw bits 0x0-0x1.4 (1.5)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff fc| |..| |.: raw bits 0x0-0x1.5 (1.6)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff fe| |..| |.: raw bits 0x0-0x1.6 (1.7)
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x0|ff ff| |..| |.: raw bits 0x0-0x1.7 (2)
|
||||
mp3> 1 | tobits(range(10)) | hex
|
||||
"80"
|
||||
"80"
|
||||
"40"
|
||||
"20"
|
||||
"10"
|
||||
"08"
|
||||
"04"
|
||||
"02"
|
||||
"01"
|
||||
"0080"
|
||||
mp3> 1 | tobytes(range(5)) | hex
|
||||
"01"
|
||||
"01"
|
||||
"0001"
|
||||
"000001"
|
||||
"00000001"
|
||||
mp3> range(17) | [range(.) | 1 | tobits] | tobits | hex
|
||||
""
|
||||
"80"
|
||||
"c0"
|
||||
"e0"
|
||||
"f0"
|
||||
"f8"
|
||||
"fc"
|
||||
"fe"
|
||||
"ff"
|
||||
"ff80"
|
||||
"ffc0"
|
||||
"ffe0"
|
||||
"fff0"
|
||||
"fff8"
|
||||
"fffc"
|
||||
"fffe"
|
||||
"ffff"
|
||||
mp3> range(17) | [range(.) | 1 | tobits] | tobytes | hex
|
||||
""
|
||||
"01"
|
||||
"03"
|
||||
"07"
|
||||
"0f"
|
||||
"1f"
|
||||
"3f"
|
||||
"7f"
|
||||
"ff"
|
||||
"01ff"
|
||||
"03ff"
|
||||
"07ff"
|
||||
"0fff"
|
||||
"1fff"
|
||||
"3fff"
|
||||
"7fff"
|
||||
"ffff"
|
||||
mp3> "c9dfdac2f6ef68e5db666b6fbeee66d9c7deda66bebfbfe860bfbfbfe9d1636bbfbebf" | hex | tobits | reduce chunk(8)[] as $c ({h:[],g:[]}; .h += [(0|tobits), $c[0:7]] | .g |= . + [if length % 8 == 0 then (0|tobits) else empty end, $c[7:8]]) | .h, .g | tobytes
|
||||
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
|
||||
0x00|64 6f 6d 61 7b 77 34 72 6d 33 35 37 5f 77 33 6c|doma{w4rm357_w3l|.: raw bits 0x0-0x22.7 (35)
|
||||
|
8
pkg/interp/testdata/completion.fqtest
vendored
@ -56,4 +56,12 @@ mp3> .frames\t
|
||||
frames[]
|
||||
mp3> .frames[]\t
|
||||
.
|
||||
mp3> "abc" | tobitsrange.s\t
|
||||
size
|
||||
start
|
||||
stop
|
||||
mp3> options.c\t
|
||||
color
|
||||
colors
|
||||
compact
|
||||
mp3> ^D
|
||||
|
21
pkg/interp/testdata/funcs.fqtest
vendored
@ -5,4 +5,25 @@ null> "abc" | topem | "before" + . + "between" + . + "after" | frompem | tostrin
|
||||
"abc"
|
||||
"abc"
|
||||
null>
|
||||
null> (0,1,1024,99999999999999999999) as $n | (2,8,16,62,64) as $r | "\($r): \($n) \($n | toradix($r)) \($n | toradix($r) | fromradix($r))" | println
|
||||
2: 0 0 0
|
||||
8: 0 0 0
|
||||
16: 0 0 0
|
||||
62: 0 0 0
|
||||
64: 0 0 0
|
||||
2: 1 1 1
|
||||
8: 1 1 1
|
||||
16: 1 1 1
|
||||
62: 1 1 1
|
||||
64: 1 1 1
|
||||
2: 1024 10000000000 1024
|
||||
8: 1024 2000 1024
|
||||
16: 1024 400 1024
|
||||
62: 1024 gw 1024
|
||||
64: 1024 g0 1024
|
||||
2: 99999999999999999999 1010110101111000111010111100010110101100011000011111111111111111111 99999999999999999999
|
||||
8: 99999999999999999999 12657072742654303777777 99999999999999999999
|
||||
16: 99999999999999999999 56bc75e2d630fffff 99999999999999999999
|
||||
62: 99999999999999999999 1V973MbJYWoT 99999999999999999999
|
||||
64: 99999999999999999999 1mL7nyRz3___ 99999999999999999999
|
||||
null> ^D
|
||||
|
5
pkg/interp/testdata/gojq.fqtest
vendored
@ -1,4 +1,9 @@
|
||||
# TODO: various gojq fq fork regression tests, should probably be move to fork code instead
|
||||
# 0xf_ffff_ffff_fffff_fffff-1 | toradix(2,8,16)
|
||||
$ fq -n '0b1111111111111111111111111111111111111111111111111111111111111111111111111110, 0o17777777777777777777777776, 0xffffffffffffffffffe'
|
||||
75557863725914323419134
|
||||
75557863725914323419134
|
||||
75557863725914323419134
|
||||
$ fq -n '[true] | all'
|
||||
true
|
||||
$ fq -n '{a:1, b: 2} | tostream'
|
||||
|
12
pkg/interp/testdata/incudepath.fqtest
vendored
@ -1,5 +1,7 @@
/library/a.jq:
def a: "a";
/config/has_error.jq:
)
$ fq -L /library -n 'include "a"; a'
"a"
$ fq --include-path /library -n 'include "a"; a'
@ -12,3 +14,13 @@ $ fq -L /wrong -n 'include "a"; a'
exitcode: 3
stderr:
error: arg:1:0: open a.jq: file does not exist
$ fq -n 'include "@config/a";'
exitcode: 3
stderr:
error: arg:1:0: open testdata/config/a.jq: no such file or directory
$ fq -n 'include "@config/missing?";'
null
$ fq -n 'include "@config/has_error?";'
exitcode: 3
stderr:
error: arg:1:0: /config/has_error.jq:1:1: parse: unexpected token ")"
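These expectations cover how `include` resolves modules: `-L`/`--include-path` adds a search directory, `@config/` resolves relative to fq's configuration directory, and a trailing `?` makes a missing or broken include optional instead of an error. A hedged sketch (directory, file and function names are placeholders):

```sh
# assuming ./lib/mylib.jq contains: def greet: "hi";
fq -L ./lib -n 'include "mylib"; greet'            # "hi"
fq -n 'include "@config/missing?"; "still runs"'   # optional include, no error
```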
6
pkg/interp/testdata/paste.fqtest
vendored
Normal file
6
pkg/interp/testdata/paste.fqtest
vendored
Normal file
@ -0,0 +1,6 @@
$ fq -i
null> paste
"test\n"
null> ^D
stdin:
test
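`paste` reads stdin until EOF and yields it as a string, which the new test exercises by feeding `test` to an interactive session. A hedged sketch of piping the pasted text onward (the `fromjson` step is just an illustrative follow-up filter, not part of the test):

```sh
fq -i
null> paste | fromjson   # paste a JSON document and explore it as a value
```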
4
pkg/interp/testdata/value_array.fqtest
vendored
4
pkg/interp/testdata/value_array.fqtest
vendored
@ -166,14 +166,14 @@ mp3> .headers._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x00|49 44 33 04 00 00 00 00 00 23 54 53 53 45 00 00|ID3......#TSSE..|.: raw bits 0x0-0x2c.7 (45)
* |until 0x2c.7 (45) | |
"buffer"
"binary"
360
mp3>
mp3> .headers._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x00|49 44 33 04 00 00 00 00 00 23 54 53 53 45 00 00|ID3......#TSSE..|.: raw bits 0x0-0x2c.7 (45)
* |until 0x2c.7 (45) | |
"buffer"
"binary"
45
mp3>
mp3> .headers._error | ., type, length?
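This and the following value_*.fqtest updates follow the same pattern: the `type` of `_bits`/`_bytes` is now reported as "binary" instead of "buffer", and `length` counts bits for `_bits` and bytes for `_bytes`. A minimal sketch against the same mp3 headers (`file.mp3` is a placeholder, expected numbers are taken from the expectation above):

```sh
fq '.headers._bits  | length' file.mp3   # 360 (bits)
fq '.headers._bytes | length' file.mp3   # 45  (bytes)
```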
4
pkg/interp/testdata/value_boolean.fqtest
vendored
4
pkg/interp/testdata/value_boolean.fqtest
vendored
@ -72,13 +72,13 @@ mp3> .headers[0].flags.unsynchronisation._path | ., type, length?
mp3> .headers[0].flags.unsynchronisation._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 00 | . |.: raw bits 0x5-0x5 (0.1)
"buffer"
"binary"
1
mp3>
mp3> .headers[0].flags.unsynchronisation._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 00 | . |.: raw bits 0x5-0x5 (0.1)
"buffer"
"binary"
0
mp3>
mp3> .headers[0].flags.unsynchronisation._error | ., type, length?
4
pkg/interp/testdata/value_json_array.fqtest
vendored
4
pkg/interp/testdata/value_json_array.fqtest
vendored
@ -81,13 +81,13 @@ json> (.)._path | ., type, length?
json> (.)._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|5b 5d| |[]| |.: raw bits 0x0-0x1.7 (2)
"buffer"
"binary"
16
json>
json> (.)._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|5b 5d| |[]| |.: raw bits 0x0-0x1.7 (2)
"buffer"
"binary"
2
json>
json> (.)._error | ., type, length?
4
pkg/interp/testdata/value_json_object.fqtest
vendored
4
pkg/interp/testdata/value_json_object.fqtest
vendored
@ -71,13 +71,13 @@ json> (.)._path | ., type, length?
json> (.)._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|7b 7d| |{}| |.: raw bits 0x0-0x1.7 (2)
"buffer"
"binary"
16
json>
json> (.)._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|7b 7d| |{}| |.: raw bits 0x0-0x1.7 (2)
"buffer"
"binary"
2
json>
json> (.)._error | ., type, length?
4
pkg/interp/testdata/value_null.fqtest
vendored
4
pkg/interp/testdata/value_null.fqtest
vendored
@ -84,13 +84,13 @@ mp3> .headers[0].padding._path | ., type, length?
mp3> .headers[0].padding._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x20| 00 00 00 00 00 00 00 00 00 00 | .......... |.: raw bits 0x23-0x2c.7 (10)
"buffer"
"binary"
80
mp3>
mp3> .headers[0].padding._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x20| 00 00 00 00 00 00 00 00 00 00 | .......... |.: raw bits 0x23-0x2c.7 (10)
"buffer"
"binary"
10
mp3>
mp3> .headers[0].padding._error | ., type, length?
4
pkg/interp/testdata/value_number.fqtest
vendored
4
pkg/interp/testdata/value_number.fqtest
vendored
@ -72,13 +72,13 @@ mp3> .headers[0].version._path | ., type, length?
mp3> .headers[0].version._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 04 | . |.: raw bits 0x3-0x3.7 (1)
"buffer"
"binary"
8
mp3>
mp3> .headers[0].version._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 04 | . |.: raw bits 0x3-0x3.7 (1)
"buffer"
"binary"
1
mp3>
mp3> .headers[0].version._error | ., type, length?
4
pkg/interp/testdata/value_object.fqtest
vendored
4
pkg/interp/testdata/value_object.fqtest
vendored
@ -88,13 +88,13 @@ mp3> .headers[0].flags._path | ., type, length?
mp3> .headers[0].flags._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 00 | . |.: raw bits 0x5-0x5.7 (1)
"buffer"
"binary"
8
mp3>
mp3> .headers[0].flags._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0| 00 | . |.: raw bits 0x5-0x5.7 (1)
"buffer"
"binary"
1
mp3>
mp3> .headers[0].flags._error | ., type, length?
4
pkg/interp/testdata/value_string.fqtest
vendored
4
pkg/interp/testdata/value_string.fqtest
vendored
@ -84,13 +84,13 @@ mp3> .headers[0].magic._path | ., type, length?
mp3> .headers[0].magic._bits | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|49 44 33 |ID3 |.: raw bits 0x0-0x2.7 (3)
"buffer"
"binary"
24
mp3>
mp3> .headers[0].magic._bytes | ., type, length?
|00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f|0123456789abcdef|
0x0|49 44 33 |ID3 |.: raw bits 0x0-0x2.7 (3)
"buffer"
"binary"
3
mp3>
mp3> .headers[0].magic._error | ., type, length?
@ -42,7 +42,7 @@ func (df DisplayFormat) FormatBase() int {
}

type S struct {
Actual interface{} // int, int64, uint64, float64, string, bool, []byte, bitio.BitReaderAtSeeker,
Actual interface{} // nil, int, int64, uint64, float64, string, bool, []byte, *bit.Int, bitio.BitReaderAtSeeker,
ActualDisplay DisplayFormat
Sym interface{}
SymDisplay DisplayFormat