Merge commit 'c8337088cbaa00c37ba90c80d4e46d50b3c55f35' into clay-permissions

Fang 2018-02-13 19:44:19 +01:00
commit 32f98da12f
14 changed files with 3104 additions and 11 deletions


@ -1931,7 +1931,7 @@
%^ circle-wire nom
~[%grams %config-l %group-l]
[cir ran]
[0 %pull wir [hos.cir dap.bol] ~]
[ost.bol %pull wir [hos.cir dap.bol] ~]
::
++ sa-eject
:> removes ships {sis} from {followers}.
@ -1996,7 +1996,7 @@
|= wir/wire
^- move
=+ tar=(wire-to-target wir)
[0 %peer wir [p.tar dap.bol] q.tar]
[ost.bol %peer wir [p.tar dap.bol] q.tar]
::
++ wire-to-target
:> ship+path from wire
@ -2091,12 +2091,12 @@
%+ turn ~(tap in ~(key by stories))
|= n/name
^- (list move)
:~ :^ 0 %poke /
:~ :^ ost.bol %poke /
:+ [our.bol dap.bol] %hall-action
:^ %source n |
[[[our.bol nom.qer] ran.qer] ~ ~]
::
:^ 0 %pull
:^ ost.bol %pull
%^ circle-wire n ~(tap in wat.qer)
[[our.bol nom.qer] ran.qer]
[[our.bol dap.bol] ~]
@ -2847,6 +2847,15 @@
lent=(lent grams.s)
known=k
mismatch=m
?: =(a 'check subs')
~& 'here are all incoming non-circle subs'
~& ^- (list (pair ship path))
%+ murn ~(tap by sup.bol)
|= {b/bone s/ship p/path}
^- (unit (pair ship path))
?: ?=({$circle *} p) ~
`[s p]
[~ +>]
?: =(a 'rebuild')
~& 'rebuilding message references...'
=- [~ +>.$(stories -)]
@ -2867,7 +2876,7 @@
:_ +>
=+ bov=(above our.bol)
?: =(bov our.bol) ~
:~ [0 %pull /burden [bov dap.bol] ~]
:~ [ost.bol %pull /burden [bov dap.bol] ~]
(wire-to-peer /burden)
==
?: =(a 'incoming')
@ -2882,5 +2891,10 @@
|= {n/name s/story}
[n src.shape.s]
[~ +>]
?: =(`0 (find "re-listen " (trip a)))
~& 're-listening'
:_ +>
:_ ~
(wire-to-peer /report/(crip (slag 10 (trip a))))
[~ +>]
--


@ -203,7 +203,7 @@
++ peer-client
:> ui state peer move
^- move
:* 0
:* ost.bol
%peer
/server/client
server
@ -212,7 +212,7 @@
::
++ peer-inbox
^- move
:* 0
:* ost.bol
%peer
/server/inbox
server
@ -2438,16 +2438,16 @@
?: =(a 'reconnect')
~& 'disconnecting and reconnecting to hall...'
:_ +>
:~ [0 %pull /server/client server ~]
[0 %pull /server/inbox server ~]
:~ [ost.bol %pull /server/client server ~]
[ost.bol %pull /server/inbox server ~]
peer-client
peer-inbox
==
?: =(a 'reset')
~& 'full reset incoming, hold on to your cli...'
:_ +>(grams ~, known ~, count 0)
:~ [0 %pull /server/client server ~]
[0 %pull /server/inbox server ~]
:~ [ost.bol %pull /server/client server ~]
[ost.bol %pull /server/inbox server ~]
peer-client
peer-inbox
==

285 gen/capitalize.hoon Normal file

@ -0,0 +1,285 @@
:: to use, download UnicodeData.txt and place it in `%/lib/unicode-data/txt`.
::
::::
::
:: part 1: parse the file into {uppers}
::
/- unicode-data
/+ new-hoon
/= case-table
/; |= a=(list line:unicode-data)
=, new-hoon
|^ %- build-tree
%- flop
(build-case-nodes a)
::
:> #
:> # %case-nodes
:> #
:> transforms raw unicode data into sequential case nodes.
+|
++ build-case-nodes
:> converts a raw list of unicode data lines into a compact list of case nodes.
|= a=(list line:unicode-data)
^- (list case-node:unicode-data)
=< out
::
:: todo: we don't have the final case range in the output of this
:: gate. this is because this algorithm doesn't work when the last
:: char is part of a range. this doesn't happen with the real one,
:: only the excerpts i was using for testing.
::
%^ foldl:ls a *case-fold
|= [c=case-fold l=line:unicode-data]
^+ c
=+ state=(line-to-case-state l)
?: (is-adjacent state prev.c)
c(prev state)
=. c (add-range c)
%= c
start
?: &(!=(case.state %missing) !=(case.state %none))
`state
~
prev state
==
::
++ line-to-case-state
:> creates an easy to merge form.
|= line:unicode-data
^- case-state
=/ out=case-state
[code %none [%none ~] [%none ~] [%none ~]]
?: =(code `@c`0)
=. case.out %missing
out
=. case.out
?+ gen %none
$lu %upper
$ll %lower
$lt %title
==
::
:: several characters aren't described as $lu or $ll but have lower or
:: upper state, such as u+2161. detect this and fix it up.
::
=? case.out &(=(case.out %none) !=(low ~)) %upper
=? case.out &(=(case.out %none) !=(up ~)) %lower
::
:: calculate offsets
::
=? upper.out !=(up ~) (calculate-offset (need up) code)
=? lower.out !=(low ~)
(calculate-offset (need low) code)
=? title.out !=(title ~) (calculate-offset (need title) code)
out
::
++ calculate-offset
|= [src=@c dst=@c]
^- case-offset:unicode-data
?: =(src dst)
[%none ~]
?: (gth src dst)
[%add (sub src dst)]
[%sub (sub dst src)]
::
++ is-adjacent
:> is {rhs} a continuation of {lhs}?
|= [lhs=case-state rhs=case-state]
^- ?
?: (lth point.rhs point.lhs)
$(lhs rhs, rhs lhs)
?: !=(point.rhs +(point.lhs))
%.n
?: !=(case.rhs case.lhs)
(upper-lower-adjacent lhs rhs)
?: =(case.lhs %none)
%.n
?: =(case.lhs %missing)
%.n
?: !=(upper.lhs upper.rhs)
%.n
?: !=(lower.lhs lower.rhs)
%.n
?: !=(title.lhs title.rhs)
%.n
%.y
::
++ upper-lower-adjacent
:> detects %upper-lower spans.
:>
:> is {lhs} the same as {rhs}, but with opposite case?
|= [lhs=case-state rhs=case-state]
?: &(=(case.lhs %upper) !=(case.rhs %lower))
%.n
?: &(=(case.lhs %lower) !=(case.rhs %upper))
%.n
::
:: to simplify detection, if things are in the opposite order, redo
:: things flipped.
::
?: =(case.lhs %lower)
$(lhs rhs, rhs lhs)
?& (is-upper-lower lhs)
(is-lower-upper rhs)
==
::
++ is-upper-lower
|= i=case-state
=(+.+.i [[%none ~] [%add 1] [%none ~]])
::
++ is-lower-upper
|= i=case-state
=(+.+.i [[%sub 1] [%none ~] [%sub 1]])
::
++ is-none
|= i=case-state
=(+.+.i [[%none ~] [%none ~] [%none ~]])
::
++ add-range
|= c=case-fold
^+ c
?~ start.c
c
?: (is-none u.start.c)
c
?: ?& (gth point.prev.c point.u.start.c)
(is-upper-lower u.start.c)
==
=/ node=case-node:unicode-data
[`@ux`point.u.start.c `@ux`point.prev.c [%uplo ~] [%uplo ~] [%uplo ~]]
c(out [node out.c])
=/ node=case-node:unicode-data
[`@ux`point.u.start.c `@ux`point.prev.c +.+.u.start.c]
c(out [node out.c])
::
++ case-fold
:> state that's part of the fold which generates the list of case-nodes
$: :> resulting data to pass to treeify.
out=(list case-node:unicode-data)
:> the start of a run of characters; ~ for not active.
start=(unit case-state)
:> previous character state
prev=case-state
==
::
++ case-state
:> a temporary model which we compress later in a second pass.
$: point=@c
case=case-class
upper=case-offset:unicode-data
lower=case-offset:unicode-data
title=case-offset:unicode-data
==
::
++ case-class
:> classification of an individual character.
$? $upper
$lower
$title
$none
$missing
==
::
:> #
:> # %tree-building
:> #
:> builds a binary search tree out of the list
+|
++ build-tree
|= a=(list case-node:unicode-data)
^- case-tree:unicode-data
:: there's probably a bottom up approach that doesn't require walking
:: a list over and over again.
?~ a
~
=+ len=(lent a)
=+ [lhs rhs]=(split-at:ls (div len 2) a)
?~ rhs
?~ lhs
~
[i.lhs ~ ~]
=+ x=[i.rhs $(a lhs) $(a t.rhs)]
x
--
/: /===/lib/unicode-data /&unicode-data&/txt/
::
:: part 2: utility core
::
|%
++ transform
|= [a=tape fun=$-(@c @c)]
%- tufa
(turn (tuba a) fun)
::
++ to-upper
:> returns the uppercase of unicode codepoint {a}
|= a=@c
^- @c
:: special case ascii to not perform map lookup.
?: (lte a max-ascii)
?: &((gte a 'a') (lte a 'z'))
(sub a 32)
a
(apply-table a case-table %upper)
::
++ to-lower
:> returns the lowercase of unicode codepoint {a}
|= a=@c
^- @c
?: (lte a max-ascii)
?: &((gte a 'A') (lte a 'Z'))
(add 32 a)
a
(apply-table a case-table %lower)
::
++ apply-table
:> searches {table} and applies {type} to {a}.
:>
:> this recursively walks the case tree {table}. if it finds an entry which
:> matches on {a}, it will apply the offset. otherwise, returns {a}.
|= [a=@c table=case-tree:unicode-data type=?($upper $lower $title)]
^- @c
?~ table
a
?: (lth a start.n.table)
$(table l.table)
?: (gth a end.n.table)
$(table r.table)
?. &((lte start.n.table a) (lte a end.n.table))
a
%^ apply-offset a type
?- type
$upper upper.n.table
$lower lower.n.table
$title title.n.table
==
::
++ apply-offset
:> applies a character offset to {a}.
|= [a=@c type=?($upper $lower $title) offset=case-offset:unicode-data]
^- @c
?- offset
{$add *} (add a a.offset)
{$sub *} (sub a s.offset)
{$none *} a
::
{$uplo *}
?- type
$upper (sub a 1)
$lower (add a 1)
$title (sub a 1)
==
==
::
++ max-ascii `@c`0x7f
--
::
:: part 3: generator
::
:- %say
|= $: [now=@da eny=@uvJ bec=beak]
[n=tape $~]
$~
==
:- %tape (transform n to-upper)

71 gen/test.hoon Normal file

@ -0,0 +1,71 @@
/+ new-hoon, tester
/= all-tests
/^ (map @ta tests:tester)
/: /===/tests
/_ /test-tree/
::
=, new-hoon
|%
::
++ test-runner
:> run all tests in {a} with a filter.
=| pax=path
|= [filter=path eny=@uvJ a=tests:tester]
^- tang
%- concat:ls
%+ turn a
|= b=instance:tester
^- tang
=^ matches filter (match-filter filter p.b)
?. matches
~
?- -.q.b
%& (run-test [p.b pax] eny p.q.b)
%| ^$(pax [p.b pax], a p.q.b)
==
::
++ run-test
:> executes an individual test.
|= [pax=path eny=@uvJ test=$-(@uvJ (list tape))]
^- tang
=+ name=(spud (flop pax))
=+ run=(mule |.((test eny)))
?- -.run
$| :: the stack is already flopped for output?
;: weld
p:run
`tang`[[%leaf (weld name " CRASHED")] ~]
==
$& ?: =(~ p:run)
[[%leaf (weld name " OK")] ~]
:: Create a welded list of all failures indented.
%- flop
;: weld
`tang`[[%leaf (weld name " FAILED")] ~]
%+ turn p:run
|= {i/tape}
^- tank
[%leaf (weld " " i)]
==
==
::
++ match-filter
:> checks to see if {name} matches the head of {filter}.
|= [filter=path name=term]
^- [? path]
?~ filter
:: when there's no filter, we always match.
[%.y ~]
[=(i.filter name) t.filter]
--
::
:- %say
|= $: [now=@da eny=@uvJ bec=beak]
[filter=$?($~ [pax=path $~])]
$~
==
:- %tang
%^ test-runner
?~ filter ~ pax.filter
eny
(test-map-to-test-list:tester all-tests)

1557 lib/new-hoon.hoon Normal file

File diff suppressed because it is too large.

165 lib/tester.hoon Normal file

@ -0,0 +1,165 @@
/+ new-hoon
::
:> testing utilities
|%
:> # %models
+|
+= tests
:> a hierarchical structure of tests
:>
:> a recursive association list mapping a part of a path
:> to either a test trap or a sublist of the same type.
(list instance)
::
+= instance
:> a mapping between a term and part of a test tree.
(pair term (each $-(@uvJ (list tape)) tests))
::
:> # %generate
:> utilities for generating ++tests from files and directories.
+|
++ merge-base-and-recur
:> combine the current file and subdirectory.
:>
:> this merges the file {base} with its child files {recur}.
|= [base=vase recur=(map @ta tests:tester)]
^- tests
=+ a=(gen-tests base)
=+ b=(test-map-to-test-list recur)
:: todo: why does ++weld not work here? {a} and {b} are cast and have the
:: correct faces.
(welp a b)
::
++ test-map-to-test-list
:> translates ford output to something we can work with.
:>
:> ford gives us a `(map @ta tests:tester)`, but we actually
:> want something like ++tests.
|= a=(map @ta tests:tester)
:: todo: i'd like to sort this, but ++sort has -find.a problems much like
:: ++weld does above!?
^- tests
%+ turn
(to-list:dct:new-hoon a)
|= {key/@ta value/tests:tester}
[key [%| value]]
::
++ gen-tests
:> creates a {tests} list out of a vase of a test suite
|= v=vase
^- tests
=+ arms=(sort (sloe p.v) aor)
%+ turn arms
|= arm/term
:- arm
:- %&
|= eny=@uvJ
=+ context=(slop !>((init-test eny)) v)
=/ r (slap context [%cnsg [arm ~] [%$ 3] [[%$ 2] ~]])
((hard (list tape)) q:(slap r [%limb %results]))
::
:> # %per-test
:> data initialized on a per-test basis.
::
++ init-test
|= {cookie/@uvJ}
~(. tester `(list tape)`~ cookie 10 0)
::
++ tester-type _(init-test `@uvJ`0)
::
++ tester
|_ $: error-lines=(list tape) :< output messages
eny=@uvJ :< entropy
check-iterations=@u :< # of check trials
current-iteration=@u :< current iteration
==
:> #
:> # %check
:> #
:> gates for quick check style tests.
+|
+- check
|* [generator=$-(@uvJ *) test=$-(* ?)]
|-
^+ +>.$
?: (gth current-iteration check-iterations)
+>.$
:: todo: wrap generator in mule so it can crash.
=+ sample=(generator eny)
:: todo: wrap test in mule so it can crash.
=+ ret=(test sample)
?: ret
%= $
eny (shaf %huh eny) :: xxx: better random?
current-iteration (add current-iteration 1)
==
=+ case=(add 1 current-iteration)
=+ case-plural=?:(=(case 1) "case" "cases")
%= +>.$
error-lines :*
"falsified after {(noah !>(case))} {case-plural} by '{(noah !>(sample))}'"
error-lines
==
==
::
:: todo: a generate function that takes an arbitrary span.
::
++ generate-range
|= [min=@ max=@]
|= c=@uvJ
^- @
=+ gen=(random:new-hoon c)
=^ num gen (range:gen min max)
num
::
++ generate-dict
:> generator which will produce a dict with {count} random pairs.
|= count=@u
:> generate a dict with entropy {c}.
|= c=@uvJ
:>
:> gen: stateful random number generator
:> out: resulting map
:> i: loop counter
:>
=/ gen (random:new-hoon c)
=| out=(dict:new-hoon @ud @ud)
=| i=@u
|-
^- (dict:new-hoon @ud @ud)
?: =(i count)
out
=^ first gen (range:gen 0 100)
=^ second gen (range:gen 0 100)
$(out (put:dct:new-hoon out first second), i +(i))
:> #
:> # %test
:> #
:> test expectation functions
+|
:: todo: unit testing libraries have a lot more to them than just eq.
++ expect-eq
|* [a=* b=* c=tape]
^+ +>
?: =(a b)
+>.$
%= +>.$
error-lines :*
"failure: '{c}'"
" actual: '{(noah !>(a))}'"
" expected: '{(noah !>(b))}'"
error-lines
==
==
::
:> #
:> # %output
:> #
:> called by the test harness
::
++ results
:> returns results.
^- (list tape)
error-lines
--
--

79 mar/unicode-data.hoon Normal file

@ -0,0 +1,79 @@
/- unicode-data
=, eyre
=, format
::
|_ all/(list line:unicode-data)
++ grab
:> converts from mark to unicode-data.
|%
++ mime |=([* a=octs] (txt (to-wain q.a))) :: XX mark translation
++ txt
|^ |= a=wain
^+ all
%+ murn a
|= b=cord
^- (unit line:unicode-data)
?~ b ~
`(rash b line)
::
:> parses a single character information line of the unicode data file.
++ line
;~ (glue sem)
hex :: code/@c codepoint in hex format
name-string :: name/tape character name
general-category :: gen/general type of character
(bass 10 (plus dit)) :: can/@ud canonical combining class
bidi-category :: bi/bidi bidirectional category
decomposition-mapping :: de/decomp decomposition mapping
::
:: todo: decimal/digit/numeric need to be parsed.
::
string-number :: decimal/tape decimal digit value (or ~)
string-number :: digit/tape digit value, even if non-decimal
string-number :: numeric/tape numeric value, including fractions
::
(flag 'Y' 'N') :: mirrored/? is char mirrored in bidi text?
name-string :: old-name/tape unicode 1.0 compatibility name
name-string :: iso/tape iso 10646 comment field
(punt hex) :: up/(unit @c) uppercase mapping codepoint
(punt hex) :: low/(unit @c) lowercase mapping codepoint
(punt hex) :: title/(unit @c) titlecase mapping codepoint
==
::
:> parses a single name or comment string.
++ name-string
%+ cook
|=(a=tape a)
(star ;~(less sem prn))
::
:> parses a unicode general category abbreviation to symbol
++ general-category
%+ sear (soft general:unicode-data)
:(cook crip cass ;~(plug hig low (easy ~)))
::
:> parses a bidirectional category abbreviation to symbol.
++ bidi-category
%+ sear (soft bidi:unicode-data)
:(cook crip cass (star hig))
::
++ decomposition-mapping
%- punt :: optional
:: a tag and a list of characters to decompose to
;~ plug
(punt (ifix [gal ;~(plug gar ace)] decomp-tag))
(cook |=(a=(list @c) a) (most ace hex))
==
::
++ decomp-tag
%+ sear (soft decomp-tag:unicode-data)
:(cook crip cass (star alf))
::
++ string-number
%+ cook
|=(a=tape a)
(star ;~(pose nud fas hep))
::
--
--
++ grad %txt
--

10 ren/test-tree.hoon Normal file

@ -0,0 +1,10 @@
/+ tester
/= base /| /!noun/
/~ ~
==
/= recur /^ (map @ta tests:tester)
/| /_ /test-tree/
/~ ~
==
::
(merge-base-and-recur:tester !>(base) recur)

150 sur/unicode-data.hoon Normal file

@ -0,0 +1,150 @@
|%
:> # %unicode-data
:> types to represent UnicodeData.txt.
+|
++ line
:> an individual codepoint definition
:>
$: code=@c :< codepoint in hexadecimal format
name=tape :< character name
gen=general :< type of character this is
:> canonical combining class for ordering algorithms
can=@ud
bi=bidi :< bidirectional category of this character
de=decomp :< character decomposition mapping
:: todo: decimal/digit/numeric need to be parsed.
decimal=tape :< decimal digit value (or ~)
digit=tape :< digit value, covering non decimal radix forms
numeric=tape :< numeric value, including fractions
mirrored=? :< whether char is mirrored in bidirectional text
old-name=tape :< unicode 1.0 compatibility name
iso=tape :< iso 10646 comment field
up=(unit @c) :< uppercase mapping codepoint
low=(unit @c) :< lowercase mapping codepoint
title=(unit @c) :< titlecase mapping codepoint
==
::
++ general
:> one of the normative or informative unicode general categories
:>
:> these abbreviations are as found in the unicode standard, except
:> lowercased as to be valid symbols.
$? $lu :< letter, uppercase
$ll :< letter, lowercase
$lt :< letter, titlecase
$mn :< mark, non-spacing
$mc :< mark, spacing combining
$me :< mark, enclosing
$nd :< number, decimal digit
$nl :< number, letter
$no :< number, other
$zs :< separator, space
$zl :< separator, line
$zp :< separator, paragraph
$cc :< other, control
$cf :< other, format
$cs :< other, surrogate
$co :< other, private use
$cn :< other, not assigned
::
$lm :< letter, modifier
$lo :< letter, other
$pc :< punctuation, connector
$pd :< punctuation, dash
$ps :< punctuation, open
$pe :< punctuation, close
$pi :< punctuation, initial quote
$pf :< punctuation, final quote
$po :< punctuation, other
$sm :< symbol, math
$sc :< symbol, currency
$sk :< symbol, modifier
$so :< symbol, other
==
::
++ bidi
:> bidirectional category of a unicode character
$? $l :< left-to-right
$lre :< left-to-right embedding
$lri :< left-to-right isolate
$lro :< left-to-right override
$fsi :< first strong isolate
$r :< right-to-left
$al :< right-to-left arabic
$rle :< right-to-left embedding
$rli :< right-to-left isolate
$rlo :< right-to-left override
$pdf :< pop directional format
$pdi :< pop directional isolate
$en :< european number
$es :< european number separator
$et :< european number terminator
$an :< arabic number
$cs :< common number separator
$nsm :< non-spacing mark
$bn :< boundary neutral
$b :< paragraph separator
$s :< segment separator
$ws :< whitespace
$on :< other neutrals
==
::
++ decomp
:> character decomposition mapping.
:>
:> tag: type of decomposition.
:> c: a list of codepoints this decomposes into.
(unit {tag/(unit decomp-tag) c/(list @c)})
::
++ decomp-tag
:> tag that describes the type of a character decomposition.
$? $font :< a font variant
$nobreak :< a no-break version of a space or hyphen
$initial :< an initial presentation form (arabic)
$medial :< a medial presentation form (arabic)
$final :< a final presentation form (arabic)
$isolated :< an isolated presentation form (arabic)
$circle :< an encircled form
$super :< a superscript form
$sub :< a subscript form
$vertical :< a vertical layout presentation form
$wide :< a wide (or zenkaku) compatibility character
$narrow :< a narrow (or hankaku) compatibility character
$small :< a small variant form (cns compatibility)
$square :< a cjk squared font variant
$fraction :< a vulgar fraction form
$compat :< otherwise unspecified compatibility character
==
::
:> #
:> # %case-map
:> #
:> types to represent fast lookups of case data
+|
++ case-offset
:> case offsets can be in either direction
$% :> add {a} to get the new character
[%add a=@u]
:> subtract {a} to get the new character
[%sub s=@u]
:> take no action; return self
[%none $~]
:> represents series of alternating uppercase/lowercase characters
[%uplo $~]
==
::
++ case-node
:> a node in a case-tree.
:>
:> represents a contiguous range of codepoints sharing the same case offsets.
$: start=@ux
end=@ux
upper=case-offset
lower=case-offset
title=case-offset
==
::
++ case-tree
:> a binary search tree of ++case-node items, sorted on span.
(tree case-node)
--

280 tests/new-hoon/ls.hoon Normal file

@ -0,0 +1,280 @@
/+ new-hoon, tester
=, ls:new-hoon
|_ tester-type:tester
++ test-head
(expect-eq (head [1 ~]) 1 "head")
::
++ test-last
(expect-eq (last:ls [1 2 ~]) 2 "last")
::
++ test-tail
(expect-eq (tail [1 2 3 ~]) [2 3 ~] "tail")
::
++ test-init
(expect-eq (init [1 2 3 ~]) [1 2 ~] "init")
::
++ test-size
(expect-eq (size ['a' 'b' 'c' ~]) 3 "size")
::
++ test-map
(expect-eq (map:ls [1 2 ~] |=(a/@ (add 1 a))) [2 3 ~] "map")
::
++ test-reverse
(expect-eq (reverse [1 2 3 ~]) [3 2 1 ~] "reverse")
::
++ test-intersperse
(expect-eq (intersperse 1 [5 5 5 ~]) [5 1 5 1 5 ~] "intersperse")
::
++ test-intercalate
%^ expect-eq
(intercalate "," ["one" "two" "three" ~])
["one,two,three"]
"intercalate"
::
++ test-transpose
%^ expect-eq
(transpose ~[~[1 2 3] ~[4 5 6]])
~[~[1 4] ~[2 5] ~[3 6]]
"transpose"
::
++ test-foldl
(expect-eq (foldl [1 2 3 ~] 3 |=({a/@ b/@} (add a b))) 9 "foldl")
::
++ test-foldr
(expect-eq (foldr [1 2 3 ~] 1 |=({a/@ b/@} (add a b))) 7 "foldr")
::
++ test-concat
(expect-eq (concat ~[~[1 2] ~[3 4]]) ~[1 2 3 4] "concat")
::
++ test-weld
(expect-eq (weld:ls ~[1 2 3] ~["one" "two"]) ~[1 2 3 "one" "two"] "weld")
::
++ test-any-true
(expect-eq (any [1 2 3 ~] |=(a/@ =(a 2))) %.y "any true")
::
++ test-any-false
(expect-eq (any [1 2 3 ~] |=(a/@ =(a 8))) %.n "any false")
::
++ test-all-true
(expect-eq (all [1 1 1 ~] |=(a/@ =(a 1))) %.y "all true")
::
++ test-all-false
(expect-eq (all [1 3 1 ~] |=(a/@ =(a 1))) %.n "all false")
::
++ test-scanl
%^ expect-eq
(scanl ~[1 2 3] 0 |=({a/@ b/@} (add a b)))
~[0 1 3 6]
"scanl"
::
++ test-scanl1
%^ expect-eq
(scanl1 ~[1 2 3] |=({a/@ b/@} (add a b)))
~[1 3 6]
"scanl1"
::
++ test-scanr
%^ expect-eq
(scanr ~[1 2 3] 0 |=({a/@ b/@} (add a b)))
~[6 5 3 0]
"scanr"
::
++ test-scanr1
%^ expect-eq
(scanr1 ~[1 2 3] |=({a/@ b/@} (add a b)))
~[6 5 3]
"scanr1"
::
++ test-map-foldl
%^ expect-eq
(map-foldl ~[1 2 3] 1 |=({a/@ b/@} [(add a b) (add 1 a)]))
[7 ~[2 3 5]]
"map-foldl"
::
++ test-map-foldr
%^ expect-eq
(map-foldr ~[1 2 3] 1 |=({a/@ b/@} [(add a b) (add 1 a)]))
[7 ~[7 5 2]]
"map-foldr"
::
++ test-unfoldr
%^ expect-eq
(unfoldr 5 |=(a/@ ?:(=(a 0) ~ `[a (dec a)])))
[5 4 3 2 1 ~]
"unfoldr"
::
++ test-take
%^ expect-eq
(take 3 ~[1 2 3 4 5])
[1 2 3 ~]
"take"
::
++ test-drop
%^ expect-eq
(drop:ls 3 ~[1 2 3 4 5])
[4 5 ~]
"drop"
::
++ test-split-at
%^ expect-eq
(split-at 3 ~[1 2 3 4 5])
[[1 2 3 ~] [4 5 ~]]
"split-at"
::
++ test-take-while
%^ expect-eq
(take-while ~[1 2 3 4 5] |=(a/@ (lth a 3)))
[1 2 ~]
"take-while"
::
++ test-drop-while
%^ expect-eq
(drop-while ~[1 2 3 4 5] |=(a/@ (lth a 3)))
[3 4 5 ~]
"drop-while"
::
++ test-drop-while-end
%^ expect-eq
(drop-while-end ~[5 5 1 5 5] |=(a/@ =(a 5)))
[5 5 1 ~]
"drop-while-end"
::
++ test-split-on
%^ expect-eq
(split-on ~[1 2 3 4 1 2 3 4] |=(a/@ (lth a 3)))
[[1 2 ~] [3 4 1 2 3 4 ~]]
"split-on"
::
++ test-break
%^ expect-eq
(break ~[1 2 3 4 1 2 3 4] |=(a/@ (gth a 3)))
[[1 2 3 ~] [4 1 2 3 4 ~]]
"break"
::
++ test-strip-prefix
%^ expect-eq
(strip-prefix "foo" "foobar")
[~ "bar"]
"break"
::
++ test-inits
%^ expect-eq
(inits "abc")
["a" "ab" "abc" ~]
"inits"
::
++ test-tails
%^ expect-eq
(tails "abc")
["abc" "bc" "c" ~]
"tails"
::
++ test-is-prefix-of
%^ expect-eq
(is-prefix-of "foo" "foobar")
%.y
"is-prefix-of"
::
++ test-is-suffix-of
%^ expect-eq
(is-suffix-of "bar" "foobar")
%.y
"is-suffix-of"
::
++ test-is-infix-of
%^ expect-eq
(is-infix-of "ob" "foobar")
%.y
"is-infix-of"
::
++ test-elem
%^ expect-eq
(elem 5 [1 2 3 4 5 ~])
%.y
"elem"
::
++ test-lookup
%^ expect-eq
(lookup "two" [["one" 1] ["two" 2] ["three" 3] ~])
[~ 2]
"lookup"
::
++ test-find
%^ expect-eq
(find:ls [3 2 1 5 1 2 3 ~] |=(a/@ (gth a 3)))
[~ 5]
"find"
::
++ test-filter
%^ expect-eq
(filter [1 2 1 2 1 ~] |=(a/@ =(a 2)))
[1 1 1 ~]
"filter"
::
++ test-partition
%^ expect-eq
(partition [1 2 1 2 1 ~] |=(a/@ =(a 2)))
[[2 2 ~] [1 1 1 ~]]
"partition"
::
++ test-elem-index
%^ expect-eq
(elem-index 2 [1 2 3 4 ~])
`1
"elem-index"
::
++ test-elem-indices
%^ expect-eq
(elem-indices 2 [1 2 1 2 ~])
[1 3 ~]
"elem-indices"
::
++ test-find-index
%^ expect-eq
(find-index [1 2 3 ~] |=(a/@ =(a 2)))
`1
"find-index"
::
++ test-find-indices
%^ expect-eq
(find-indices [1 2 1 2 ~] |=(a/@ =(a 2)))
[1 3 ~]
"find-indices"
::
++ test-zip
%^ expect-eq
(zip [[1 2 3 ~] [4 5 6 ~] [7 8 9 ~] ~])
[[1 4 7 ~] [2 5 8 ~] [3 6 9 ~] ~]
"zip"
::
++ test-unique
%^ expect-eq
(unique [1 2 3 1 2 3 ~])
[1 2 3 ~]
"unique"
::
++ test-delete
%^ expect-eq
(delete 2 [1 2 3 2 ~])
[1 3 2 ~]
"delete"
::
++ test-delete-firsts
%^ expect-eq
(delete-firsts [1 2 2 2 3 4 5 ~] [2 2 5 ~])
[1 2 3 4 ~]
"delete-firsts"
::
++ test-union
%^ expect-eq
(union [1 2 3 ~] [4 2 5 ~])
[1 2 3 4 5 ~]
"union"
::
++ test-intersect
%^ expect-eq
(intersect [5 6 6 7 8 ~] [9 8 8 6 ~])
[6 6 8 ~]
"intersect"
--

360 tests/new-hoon/mp.hoon Normal file

@ -0,0 +1,360 @@
/+ new-hoon, tester
=, dct:new-hoon
=+ four=(from-list [[1 "one"] [2 "two"] [3 "three"] [4 "four"] ~])
=+ three=(from-list [[1 "one"] [2 "two"] [3 "three"] ~])
|_ tester-type:tester
++ test-empty
(expect-eq (empty four) %.n "empty")
::
++ test-size
(expect-eq (size four) 4 "size")
::
++ test-member
(expect-eq (member four 4) %.y "member")
::
++ test-put-with
=+ ints=(from-list [["one" 1] ["two" 2] ["three" 3] ["four" 4] ~])
%^ expect-eq
(put-with ints "three" 2 add)
(from-list [["one" 1] ["two" 2] ["three" 5] ["four" 4] ~])
"put-with"
::
++ test-put-with-key
%^ expect-eq
(put-with-key four 4 "four" |=({a/@ud b/tape c/tape} (weld (scow %ud a) b)))
(from-list [[1 "one"] [2 "two"] [3 "three"] [4 "4four"] ~])
"put-with-key"
::
++ test-put-lookup-with-key
%^ expect-eq
%- put-lookup-with-key :^
four
4
"five"
|=({key/@ud old/tape new/tape} new)
:- `"four"
(from-list [[1 "one"] [2 "two"] [3 "three"] [4 "five"] ~])
"put-lookup-with-key"
::
++ test-delete
%^ expect-eq
(delete four 4)
three
"delete"
::
++ test-adjust
%^ expect-eq
%^ adjust
four
3
|=(a/tape (weld "this" a))
(from-list [[1 "one"] [2 "two"] [3 "thisthree"] [4 "four"] ~])
"adjust"
::
++ test-adjust-with-key
%^ expect-eq
%^ adjust-with-key
four
3
|=({a/@ud b/tape} (weld (scow %ud a) b))
(from-list [[1 "one"] [2 "two"] [3 "3three"] [4 "four"] ~])
"adjust-with-key"
::
++ test-update
%^ expect-eq
%^ update
four
3
|=(a/tape `(maybe tape)`~)
(from-list [[1 "one"] [2 "two"] [4 "four"] ~])
"update"
::
++ test-update-with-key
%^ expect-eq
%^ update-with-key
four
3
|=({a/@u b/tape} `(maybe tape)`[~ (weld (scow %ud a) b)])
(from-list [[1 "one"] [2 "two"] [3 "3three"] [4 "four"] ~])
"update-with-key"
::
++ test-alter-as-add
%^ expect-eq
%^ alter
four
5
|=(a/(maybe tape) `(maybe tape)`[~ "five"])
(from-list [[1 "one"] [2 "two"] [3 "three"] [4 "four"] [5 "five"] ~])
"alter (as add)"
::
++ test-alter-as-delete
%^ expect-eq
%^ alter
four
2
|=(a/(maybe tape) `(maybe tape)`~)
(from-list [[1 "one"] [3 "three"] [4 "four"] ~])
"alter (as delete)"
::
++ test-alter-as-change
%^ expect-eq
%^ alter
four
2
|=(a/(maybe tape) `(maybe tape)`[~ "dos"])
(from-list [[1 "one"] [2 "dos"] [3 "three"] [4 "four"] ~])
"alter (as change)"
::
++ check-alter
:: check random dicts of 50 items with 40 random operations done on them
:: for validity.
%+ check
(generate-dict 50)
|= a/(dict @ud @ud)
:: this is dumb, but use {a} as entropy?
=/ gen (random:new-hoon (jam a))
=| i/@u
|-
?: =(i 40)
%.y
=^ key gen (range:gen 0 100)
=^ value gen (range:gen 0 100)
=. a %^ alter-with-key a key
|= {key/@ud current/(maybe @ud)}
^- (maybe @ud)
=+ action=(mod key 2)
?: =(action 0) :: return nothing
~
?: =(action 1) :: add/set value
`value
~ :: impossible
?. (valid a)
%.n
$(i +(i))
::
++ test-union
%^ expect-eq
%+ union
(from-list [[1 "left"] [2 "left"] ~])
(from-list [[2 "right"] [3 "right"] ~])
(from-list [[1 "left"] [2 "left"] [3 "right"] ~])
"union"
::
++ test-union-with
%^ expect-eq
%^ union-with
(from-list [[1 "left"] [2 "left"] ~])
(from-list [[2 "right"] [3 "right"] ~])
|=({a/tape b/tape} (weld a b))
(from-list [[1 "left"] [2 "leftright"] [3 "right"] ~])
"union-with"
::
++ test-union-with-key
%^ expect-eq
%^ union-with-key
(from-list [[1 "left"] [2 "left"] ~])
(from-list [[2 "right"] [3 "right"] ~])
|=({a/@ud b/tape c/tape} :(weld `tape`(scow %ud a) b c))
(from-list [[1 "left"] [2 "2leftright"] [3 "right"] ~])
"union-with-key"
::
++ test-map
%^ expect-eq
%+ map:dct
three
crip
(from-list [[1 'one'] [2 'two'] [3 'three'] ~])
"map"
::
++ test-map-with-key
%^ expect-eq
%+ map-with-key
three
|=({a/@u b/tape} (weld (scow %ud a) b))
(from-list [[1 "1one"] [2 "2two"] [3 "3three"] ~])
"map-with-key"
::
++ test-map-fold
%^ expect-eq
%^ map-fold
three
"Everything: "
|= {accumulator/tape value/tape}
[(weld accumulator value) (weld value "X")]
:- "Everything: twoonethree"
(from-list [[1 "oneX"] [2 "twoX"] [3 "threeX"] ~])
"map-fold"
::
++ test-map-keys
%^ expect-eq
%+ map-keys
three
|= a/@u
(add a 10)
(from-list [[11 "one"] [12 "two"] [13 "three"] ~])
"map-keys"
::
++ test-map-keys-with
%^ expect-eq
%^ map-keys-with
three
|=(a/@u 42)
weld
(from-list [[42 "twothreeone"] ~])
"map-keys-with"
::
++ test-fold
%^ expect-eq
%^ fold
three
"Everything: "
:: todo: this works, but replacing this gate with just ++weld causes an out-of-loom error.
|= {accumulator/tape value/tape}
^- tape
(weld accumulator value)
"Everything: twoonethree"
"map-fold"
::
++ test-fold-with-keys
%^ expect-eq
%^ fold-with-keys
three
"Everything: "
|= {accumulator/tape key/@u value/tape}
^- tape
:(weld accumulator (scow %ud key) value)
"Everything: 2two1one3three"
"map-fold-with-keys"
::
++ test-elems
%^ expect-eq
(elems three)
["two" "three" "one" ~]
"elems"
::
++ test-keys
%^ expect-eq
(keys three)
[2 3 1 ~]
"keys"
::
++ test-keys-set
%^ expect-eq
(keys-set three)
(si:nl [2 3 1 ~])
"keys-set"
::
++ test-from-set
%^ expect-eq
%+ from-set
(si:nl [1 2 3 ~])
|= a/@u
(scow %ud a)
(from-list [[1 "1"] [2 "2"] [3 "3"] ~])
"from-set"
::
++ test-from-list-with
%^ expect-eq
%+ from-list-with
[[1 1] [2 1] [2 1] [3 3] ~]
add
(from-list [[1 1] [2 2] [3 3] ~])
"from-list-with"
::
++ test-filter
%^ expect-eq
%+ filter
(from-list [[1 1] [2 1] [3 2] [4 1] ~])
|=(a/@u !=(a 1))
(from-list [[1 1] [2 1] [4 1] ~])
"filter"
::
++ test-filter-with-key
%^ expect-eq
%+ filter-with-key
(from-list [[1 1] [2 1] [3 2] [4 1] ~])
|=({a/@u b/@u} =(a 2))
(from-list [[1 1] [3 2] [4 1] ~])
"filter-with-key"
::
++ test-restrict-keys
%^ expect-eq
%+ restrict-keys
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
(si:nl [1 3 5 ~])
(from-list [[1 1] [3 3] [5 5] ~])
"restrict-keys"
::
++ test-without-keys
%^ expect-eq
%+ without-keys
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
(si:nl [1 3 5 ~])
(from-list [[2 2] [4 4] ~])
"restrict-keys"
::
++ test-partition
%^ expect-eq
%+ partition
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
|=(a/@u |(=(a 1) =(a 3)))
:- (from-list [[1 1] [3 3] ~])
(from-list [[2 2] [4 4] [5 5] ~])
"partition"
::
++ test-map-maybe
%^ expect-eq
%+ map-maybe
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
|=(a/@u ?:(=(a 3) ~ `a))
(from-list [[1 1] [2 2] [4 4] [5 5] ~])
"map-maybe"
::
++ test-map-maybe-with-key
%^ expect-eq
%+ map-maybe-with-key
(from-list [[1 2] [2 3] [3 4] [4 5] [5 6] ~])
|=({k/@u v/@u} ?:(=(k 3) ~ `v))
(from-list [[1 2] [2 3] [4 5] [5 6] ~])
"map-maybe-with-key"
::
++ test-map-either
%^ expect-eq
%+ map-either
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
|= value/@u
?: =(0 (mod value 2))
[%& "even"]
[%| 1]
:- (from-list [[2 "even"] [4 "even"] ~])
(from-list [[1 1] [3 1] [5 1] ~])
"map-either"
::
++ test-map-either-with-key
%^ expect-eq
%+ map-either-with-key
(from-list [[1 1] [2 1] [3 1] [4 1] [5 1] ~])
|= {key/@u value/@u}
?: =(0 (mod key 2))
[%& "even"]
[%| 1]
:- (from-list [[2 "even"] [4 "even"] ~])
(from-list [[1 1] [3 1] [5 1] ~])
"map-either"
::
++ test-is-subdict
%^ expect-eq
%^ is-subdict-by
(from-list [[1 1] [4 4] ~])
(from-list [[1 1] [2 2] [3 3] [4 4] [5 5] ~])
|=({a/* b/*} =(a b))
%.y
"is-subdict"
::
++ test-valid
%^ expect-eq
(valid (from-list [[1 1] [2 2] [3 3] [4 4] [5 5] [6 6] [7 7] [8 8] [9 9] ~]))
%.y
"valid"
--

32 tests/new-hoon/myb.hoon Normal file

@ -0,0 +1,32 @@
/+ new-hoon, tester
=, myb:new-hoon
|_ tester-type:tester
++ test-from-list-null
(expect-eq (from-list ~) ~ "from-list")
::
++ test-from-list-real
(expect-eq (from-list [5 ~]) [~ 5] "from-list")
::
++ test-to-list-null
(expect-eq (to-list ~) ~ "to-list")
::
++ test-to-list-real
(expect-eq (to-list [~ 5]) [5 ~] "to-list")
::
++ test-concat-null
(expect-eq (concat ~) ~ "concat")
::
++ test-concat-real
:: wait, if i pull the cast out from below, the concat implementation
:: doesn't compile anymore?
(expect-eq (concat `(list (maybe @ud))`[~ [~ 1] ~ [~ 2] ~]) [1 2 ~] "concat")
::
++ test-map
%^ expect-eq
%+ map:myb
[1 2 3 2 ~]
|=(a/@u ?:(=(2 a) [~ 2] ~))
[2 2 ~]
"map"
--

32 tests/new-hoon/thr.hoon Normal file

@ -0,0 +1,32 @@
:: tests for the either core.
/+ new-hoon, tester
=, thr:new-hoon
=/ data/(list (either @u tape)) [[%& 1] [%| "one"] [%& 2] [%| "two"] ~]
|_ tester-type:tester
++ test-apply
%^ expect-eq
%^ apply
`(either @u tape)`[%| "one"]
|=(a/@u "left")
|=(b/tape "right")
"right"
"apply"
::
++ test-firsts
%^ expect-eq
(firsts data)
[1 2 ~]
"firsts"
::
++ test-seconds
%^ expect-eq
(seconds data)
["one" "two" ~]
"seconds"
::
++ test-partition
%^ expect-eq
(partition data)
[[1 2 ~] ["one" "two" ~]]
"partition"
--

58 web/testing.umd Normal file

@ -0,0 +1,58 @@
:- ~[comments+&]
;>
# Writing Unit Tests
Urbit comes with a built-in system for writing tests. Just as hoon files of a
certain shape go in `%/app`, `%/gen`, or `%/mar`, hoon files of a certain shape
can go in `%/tests`, where they are exposed to a system-wide test runner.
Say you put a test suite in `%/tests/new-hoon/thr.hoon`:
```
> +ls %/tests
new-hoon/
> +ls %/tests/new-hoon
ls/hoon mp/hoon myb/hoon thr/hoon
```
You can then run just that test suite (and not the ones beside it in the `%/tests/new-hoon` directory) with:
```
> +test /new-hoon/thr
/new-hoon/thr/test-seconds OK
/new-hoon/thr/test-partition OK
/new-hoon/thr/test-firsts OK
/new-hoon/thr/test-apply OK
```
## The test file
So what is the structure of these test files? Each contains a door whose arms start with `++test-` or `++check-`. At minimum:
```
/+ tester
|_ tester-type:tester
++ test-some-test
(expect-eq 4 4 "trivial")
--
```
All of the utilities you need to write tests are in the `tester` library. As in other hoon files, you can stack cores for models and utility functions; only the final core is inspected for test arms.
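The example above only exercises `expect-eq`. For property-style tests, the `tester` library added in this commit also provides a `check` arm and sample generators such as `generate-range`; a minimal sketch of a `++check-` arm built on them might look like this (the arm name and the property it checks are hypothetical, not part of the commit):
```
/+ tester
|_ tester-type:tester
++ check-add-commutes
:: property: addition of atoms commutes, tried against random samples.
%+ check
(generate-range 0 100)
|= a/@
=((add a 1) (add 1 a))
--
```
`check` runs the property against a number of random samples and, on failure, reports the first falsifying case, in the same style as the `check-alter` arm of `tests/new-hoon/mp.hoon` above.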
## Some Details
So internally, how does this work?
The `+test` generator depends on each file/directory in `%/tests/` through a renderer. Each node in the filesystem tree is rendered by `%/ren/test-tree.hoon`, which calls itself recursively for subdirectories.
This means all compiling of test cases happens inside ford, which can cache work and not recompile tests whose dependencies haven't changed. At runtime, all the `+test` generator does is filter and execute tests from the tree.
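Concretely, the runner assembles that tree with a few ford runes; this is the header of `gen/test.hoon` as added in this commit, which asks ford for everything under `%/tests` rendered through the test-tree renderer:
```
/+ new-hoon, tester
/= all-tests
/^ (map @ta tests:tester)
/: /===/tests
/_ /test-tree/
```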
I would like to get to a place where any direct scrying of the filesystem is discouraged, and almost everything flows through the functional reactive build system. This is what it is here for.
### Future distribution of hoon libraries
Implicit in having a standard way to write tests and a standard `+test` runner is the idea that all functionality on the current desk should be tested.
Let's say I'm shipping a program on Urbit and I use multiple third-party libraries. Each of those libraries should have its own test suite placed in `%/tests/`. When I `|merge` their desks into my application desk, having a standard test runner means that all of their tests and all of my application tests get run. If you're depending on a library, you want to make sure that the tests for your dependencies run when you test your application.