@ -685,8 +685,8 @@ each in detail.

These two kisses are nearly identical. At a high level, they apply changes to
the filesystem. Whenever we add, remove, or edit a file, one of these cards is
sent. The `p` is the ship whose filesystem we're trying to change, the `q` is
the desk we're changing, and the `r` is the requested change. For the format
of the requested change, see the documentation for `++nori` above.

When a file is changed in the unix filesystem, vere will send a `%into` kiss.
This tells clay that the duct over which the kiss was sent is the duct that

@ -2035,3 +2035,397 @@ producing it if it has; else, we call `++blub` since no more data can be

produced over this subscription.

This concludes our discussion of foreign requests.

Lifecycle of a Local Write
--------------------------

There are two kisses that cause a local write: `%info` and `%into`. These are
exactly identical except that `%into` resets the sync duct in clay, so it
ought only to be called from unix. Within arvo, we call `%info`.

Both are handled in `++call`.

```
        ?(%info %into)
      ?:  =(%$ q.q.hic)
        ?.  ?=(%into -.q.hic)  [~ ..^$]
        =+  yar=(need (~(get by fat.ruf) p.q.hic))
        [~ ..^$(fat.ruf (~(put by fat.ruf) p.q.hic yar(hez [~ hen])))]
      =^  mos  ruf
        =+  une=(un p.q.hic now ruf)
        =+  ^=  zat
            (exec:(di:wake:une q.q.hic) hen now r.q.hic)
        =+  zot=abet.zat
        :-  -.zot
        =.  une  (pish:une q.q.hic +.zot ran.zat)
        abet:une(hez.yar ?.(=(%into -.q.hic) hez.yar.une [~ hen]))
      [mos ..^$]
```

Recall that in the kiss (`q.hic`) the `p` is the ship whose filesystem we're
trying to change, the `q` is the desk we're changing, and the `r` is the
requested change.

If `q`, the desk name, is empty, then we don't make any actual changes to the
filesystem. In the case of `%info`, we do nothing at all. For the `%into`
kiss, we simply set the sync duct to the duct we received this kiss on. This
allows us to set the sync duct without making a change to our filesystem.

Otherwise, we construct the core for a local ship with `++un` and for the local
desk with `++di`, as described above. We then apply the change with
`++exec:de`, which contains the meat of the write functionality. Afterward, we
call `++abet:de` to resolve our changes to the desk and `++pish:un` and
`++abet:un` to resolve our changes to the ship, as described above. Again, if
this is an `%info` kiss, then we don't change the sync duct; else, we set it to
the calling duct.
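
The control flow here is easy to lose in the Hoon, so here is a rough Python
sketch of the same dispatch. The names (`room`, `sync_duct`, `apply`) are
illustrative stand-ins, not clay's actual structures.

```
def handle_write(state, tag, ship, desk, change, duct):
    """tag is 'info' or 'into'; an empty desk name means 'no real write'."""
    room = state.rooms[ship]                # per-ship state, like a room in ruf
    if desk == "":
        # No filesystem change. %into still records the calling duct as the
        # sync duct; %info does nothing at all.
        if tag == "into":
            room.sync_duct = duct
        return []
    moves = room.desks[desk].apply(change)  # the work done by ++exec:de
    if tag == "into":
        room.sync_duct = duct               # only %into resets the sync duct
    return moves
```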

The interesting call here is, of course, `++exec:de`.

```
  ++  exec                                            ::  change and update
    |=  [hen=duct wen=@da lem=nori]
    ^+  +>
    (echo:wake:(edit wen lem) hen wen lem)
```

First, we call `++edit` to apply our changes, then we call `++wake` to push out
any new updates to our subscribers. Finally, we call `++echo` to announce our
changes to both unix and the terminal.
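
Restated in the Python model, the sequencing is just this; `edit` refers to the
sketch later in this section, while `wake` and `echo` are stand-ins for the
subscriber update and the announcement described around it:

```
def exec(desk, duct, when, change):
    edit(desk, when, change)            # apply the change (++edit)
    wake(desk)                          # push updates to subscribers (++wake)
    echo(desk, duct, when, change)      # announce to unix and the terminal (++echo)
```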

We have described `++wake` above, so we'll discuss `++edit` and `++echo` here.
Since `++echo` is significantly simpler, we'll start with it.

```
  ++  echo                                            ::  announce changes
    |=  [hen=duct wen=@da lem=nori]
    ^+  +>
    %=  +>
      vag  ?~(hez vag :_(vag [u.hez [%ergo who syd let.dom]]))
      yel
        =+  pre=`path`~[(scot %p for) syd (scot %ud let.dom)]
        ?-  -.lem
          |  :_  yel
             [hen %note '=' %leaf :(weld (trip p.lem) " " (spud pre))]
          &  |-  ^+  yel
             ?~  q.q.lem  yel
             :_  $(q.q.lem t.q.q.lem)
             :-  hen
             :+  %note
               ?-(-.q.i.q.q.lem %del '-', %ins '+', %mut ':')
             [%leaf (spud (weld pre p.i.q.q.lem))]
        ==
    ==
```

If we have a sync duct, then we push out a `%ergo` gift along it so that unix
knows there has been a change to the filesystem and can update the copy on the
unix filesystem.

Additionally, we push out a `%note` gift to the terminal duct to display the
new changes to the user. This is responsible for the printed lines we see when
a file is added, removed, or modified.
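
As a concrete illustration of those printed lines, here is a hedged Python
sketch of the notification logic: `%del`, `%ins`, and `%mut` map to the `-`,
`+`, and `:` glyphs, prefixed by the ship, desk, and revision. The function
name and data shapes are invented for the example.

```
CHANGE_GLYPH = {"del": "-", "ins": "+", "mut": ":"}

def echo_notes(ship, desk, aeon, change):
    """Return (glyph, text) pairs mirroring the %note gifts built above."""
    pre = f"/~{ship}/{desk}/{aeon}"
    if change[0] == "label":                      # the | case: a new label
        return [("=", f"{change[1]} {pre}")]
    return [(CHANGE_GLYPH[op], f"{pre}/{path}")   # the & case: one per file
            for path, op in change[1]]
```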

It remains to discuss `++edit:de`.

```
  ++  edit                                            ::  apply changes
    |=  [wen=@da lem=nori]
    ^+  +>
    =+  axe=(~(edit ze lim dom ran) wen lem)
    =+  `[l=@da d=dome r=rang]`+<.axe
    +>.$(dom d, ran r)
```

We very simply call `++edit:ze` and apply the resultant dome and rang back into
ourself. As we should expect, the actual handling of the changes themselves
is delegated to `++ze` in `arvo/zuse.hoon`.

```
  ++  edit                                            ::  edit:ze
    |=  [wen=@da lem=nori]                            ::  edit
    ^+  +>
    ?-    -.lem
        &
      =^  yak  lat                                    ::  merge objects
        %+  forge-yaki  wen
        ?:  =(let 0)                                  ::  initial import
          [~ q.lem]
        [(some r:(aeon-to-yaki let)) q.lem]
      ?.  ?|  =(0 let)
              !=((lent p.yak) 1)
              !(equiv q.yak q:(aeon-to-yaki let))
          ==
        +>.$                                          ::  silently ignore
      =:  let  +(let)
          hit  (~(put by hit) +(let) r.yak)
          hut  (~(put by hut) r.yak yak)
        ==
      +>.$(ank (checkout-ankh q.yak))
        |
      +>.$(lab ?<((~(has by lab) p.lem) (~(put by lab) p.lem let)))
    ==
```

Two kinds of changes may be made to a filesystem: we can modify the contents
or we can label a revision.

Labeling a revision (the `|` case) is much simpler. We first assert that the
label doesn't already exist. Then, in `lab` in our dome, we associate the
label with the current revision number.
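
In Python terms, the label case amounts to no more than this (a sketch over a
hypothetical `desk` record with `labels` and `head_aeon` fields, not clay's own
types):

```
def label(desk, name):
    assert name not in desk.labels        # refuse to clobber an existing label
    desk.labels[name] = desk.head_aeon    # point the label at the current head
```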

In the `&` case, we're actually modifying the contents of the filesystem.
First, we create the commit in `++forge-yaki` by applying the given changes to
our current revision. This also updates `lat` in our rang with the new data
objects.

Unless either this is the initial import, the generated yaki doesn't have
exactly one parent, or the data in the generated yaki differs from that in
our current revision, we silently ignore the request. Note that this means a
change which doesn't affect the contents of the filesystem is only accepted if
it is a merge.

If one of the conditions does hold, then we apply the generated commit. We
increment `let`, the revision number of our head; associate the new revision
number with the hash of the new commit; and put the new commit in `hut`.
Finally, we update our current ankh by checking out the new commit.
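
Here is a hedged Python sketch of that accept-or-ignore logic, built on the
`forge`, `equiv`, and `checkout` helpers sketched in the following sections
and the same toy `desk` record as above; none of these names are clay's own.

```
def edit_contents(desk, when, edits):
    parent = None if desk.head_aeon == 0 else desk.aeons[desk.head_aeon]
    commit, desk.objects = forge(desk, parent, edits, when)
    head = desk.commits.get(parent)
    unchanged = (parent is not None and len(commit["parents"]) == 1
                 and equiv(commit["tree"], head["tree"]))
    if unchanged:
        return                                    # silently ignore the no-op
    desk.head_aeon += 1
    desk.aeons[desk.head_aeon] = commit["hash"]   # like hit: aeon -> commit hash
    desk.commits[commit["hash"]] = commit         # like hut: hash -> commit
    desk.ankh = checkout(commit["tree"])          # refresh the checked-out tree
```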

We discussed `++checkout-ankh` above, so it remains only to discuss
`++forge-yaki` and `++equiv`. We begin with the simpler, `++equiv:ze`.

```
  ++  equiv                                           ::  test paths
    |=  [p=(map path lobe) q=(map path lobe)]
    ^-  ?
    =-  ?.  qat  %.n
        %+  levy  (~(tap by q) ~)
        |=  [pat=path lob=lobe]
        (~(has by p) pat)
    ^=  qat
    %+  levy  (~(tap by p) ~)
    |=  [pat=path lob=lobe]
    =+  zat=(~(get by q) pat)
    ?~  zat  %.n
    =((lobe-to-noun u.zat) (lobe-to-noun lob))
```

We're checking to see if the data in both filesystem trees is identical. We
start by going through `p` and checking that (1) each path in `p` exists in
`q` and (2) the data there is the same as in `q`.

This shows that `q` is a superset of `p`. To show that `p` and `q` are
equivalent, we then make sure there is nothing in `q` that is not also in
`p`. Once both checks pass, we know `p` and `q` are equivalent.
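
The same two-sided check, as a Python sketch (plain equality stands in for the
`++lobe-to-noun` normalization):

```
def equiv(p, q):
    """True if two maps from path to data hash agree exactly."""
    # every entry of p must appear in q with identical data ...
    if any(path not in q or q[path] != lobe for path, lobe in p.items()):
        return False
    # ... and q may not contain any path that p lacks
    return all(path in p for path in q)
```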

```
  ++  forge-yaki                                      ::  forge-yaki:ze
    |=  [wen=@da par=(unit tako) lem=soba]            ::  forge yaki
    =+  ^=  per
        ?~  par  ~
        ~[u.par]
    =+  gar=(update-lat (apply-changes q.lem) lat)
    :-  %^  make-yaki  per  +.gar  wen                ::  from existing diff
    -.gar                                             ::  fix lat
```

Here, we first make `per`, our list of parents. If we have a parent, we put it
in the list, else the list is empty. Simple.

We then apply the changes and update `lat`, our object store. Finally, we make
a yaki out of the generated change information and produce both it and the new
object store.
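
A sketch of the forging step in the same Python model, composed from the
helpers sketched in the next few sections (`apply_changes`, `update_lat`,
`make_commit`); the `desk` record and its fields are, again, invented:

```
def forge(desk, parent, edits, when):
    """parent is None for the initial import, else the hash of the head commit."""
    parents = [] if parent is None else [parent]
    head_tree = {} if parent is None else desk.commits[parent]["tree"]
    new_tree = apply_changes(head_tree, edits, desk.objects)  # path -> blob
    objects, tree = update_lat(new_tree, desk.objects)        # store, path -> hash
    return make_commit(parents, tree, when), objects
```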

In increasing order of complexity, the new arms here are `++make-yaki`,
`++update-lat`, and `++apply-changes`.

```
  ++  make-yaki                                       ::  make yaki
    |=  [p=(list tako) q=(map path lobe) t=@da]
    ^-  yaki
    =+  ^=  has
        %^  cat  7  (sham [%yaki (roll p add) q t])
        (sham [%tako (roll p add) q t])
    [p q has t]
```

We're given almost everything we need to make a yaki, so we just need to
generate the hash of the new yaki. We take a noun hash of a noun that depends
on the hashes of the parents, the data at this commit, and the date of the
commit. Note that this means two identical changes made on the same parents at
different times will have different hashes.
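
The point to take away is the identity rule: the commit hash covers the
parents, the tree, and the timestamp. A simplified Python version (a single
SHA-256 stands in for the two concatenated `++sham`s over the summed parent
hashes):

```
import hashlib
import json

def make_commit(parents, tree, when):
    """Hash the parents, the path -> data-hash tree, and the date together."""
    payload = json.dumps([sorted(parents), sorted(tree.items()), when])
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"parents": parents, "tree": tree, "hash": digest, "date": when}
```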

```
  ++  update-lat                                      ::  update-lat:ze
    |=  [lag=(map path blob) sta=(map lobe blob)]     ::  fix lat
    ^-  [(map lobe blob) (map path lobe)]
    %+  roll  (~(tap by lag) ~)
    |=  [[pat=path bar=blob] [lut=_sta gar=(map path lobe)]]
    ?~  (~(has by lut) p.bar)
      [lut (~(put by gar) pat p.bar)]
    :-  (~(put by lut) p.bar bar)
    (~(put by gar) pat p.bar)
```

We're given a map of paths directly to their contents, but we wish to have both
a map from paths to hashes of their contents and a map from hashes to the
content itself. We're given an initial map of the second kind, but when
applying the changes, we may add new content which is not yet stored here.

We roll over the given map from paths to data and, if the data is already in
our store, then we simply add a reference to the hash in the map from paths to
hashes. Otherwise, we also have to add the entry in the map from hashes to
data.
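
In the Python model, where a blob is a small dict carrying its own `"hash"`,
the fold looks like this (a sketch, not clay's real types):

```
def update_lat(new_tree, objects):
    """new_tree maps path -> blob; objects maps blob hash -> blob."""
    store, tree = dict(objects), {}
    for path, blob in new_tree.items():
        if blob["hash"] not in store:     # only store content we don't have yet
            store[blob["hash"]] = blob
        tree[path] = blob["hash"]         # always record the path -> hash entry
    return store, tree
```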

```
  ++  apply-changes                                   ::  apply-changes:ze
    |=  lar=(list ,[p=path q=miso])                   ::  store changes
    ^-  (map path blob)
    =+  ^=  hat                                       ::  current state
        ?:  =(let 0)                                  ::  initial commit
          ~                                           ::  has nothing
        =<  q
        %-  aeon-to-yaki
        let
    =-  =+  sar=(sa (turn lar |=([p=path *] p)))      ::  changed paths
        %+  roll  (~(tap by hat) ~)                   ::  find unchanged
        |=  [[pat=path gar=lobe] bat=_bar]
        ?:  (~(has in sar) pat)                       ::  has update
          bat
        (~(put by bat) pat (lobe-to-blob gar))        ::  use original
    ^=  bar  ^-  (map path blob)
    %+  roll  lar
    |=  [[pat=path mys=miso] bar=(map path blob)]
    ^+  bar
    ?-    -.mys
        %ins                                          ::  insert if not exist
      ?:  (~(has by bar) pat)  !!                     ::
      ?:  (~(has by hat) pat)  !!                     ::
      (~(put by bar) pat (make-direct p.mys %c))      ::  TODO content type?
        %del                                          ::  delete if exists
      ?.  |((~(has by hat) pat) (~(has by bar) pat))  !!
      (~(del by bar) pat)
        %mut                                          ::  mutate, must exist
      =+  ber=(~(get by bar) pat)
      ?~  ber
        =+  har=(~(get by hat) pat)
        ?~  har  !!
        %+  ~(put by bar)  pat
        (make-delta u.har p.mys)
      %+  ~(put by bar)  pat
      (make-delta p.u.ber p.mys)
    ==
```

We let `hat` be the state of our head. We let `bar` be the new state of
the files we touch in our changes, and then we add in the unchanged files.

To compute `bar`, we go through each change, handling each one individually.
If the change is an insert, then we first assert that the file doesn't already
exist and that we haven't already added it in this changeset. Note that this
means it is impossible to delete a file and then insert it again in the same
changeset. If this is indeed a new file, then we put the path into `bar`,
associated with its data blob, as calculated by `++make-direct`.

```
  ++  make-direct                                     ::  make blob
    |=  [p=* q=umph]
    ^-  blob
    [%direct (mug p) p q]
```

We're given everything we need to create a `%direct` blob except the hash,
which we calculate as the simple mug of the file contents.
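
In the Python sketches, the corresponding blob constructors might look like
this; the `%delta` counterpart is included here because it appears in the
mutation case below. Both are hypothetical shapes, not clay's actual blobs,
and Python's `hash` merely stands in for `++mug`:

```
def make_direct(data):
    """A %direct-style blob: the content itself, keyed by a hash of it."""
    return {"kind": "direct", "hash": hash(data), "data": data}

def make_delta(base, diff):
    """A %delta-style blob: a diff against the blob it was made from."""
    return {"kind": "delta", "hash": hash((base["hash"], repr(diff))),
            "base": base["hash"], "diff": diff}
```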

In the case of a delete, we first assert that the file exists in either the
current head or our new changes. Note that it is possible to insert a file and
then delete it in the same changeset. If the file does exist, then we remove
it from `bar`.

Finally, in the case of a mutation, we try to get the current state of the file
from our new changes in `bar`. If it's not there, then we assert that the file
exists in our current head (it must, after all, if we're changing it), and we
make a `%delta` blob out of the difference between the old contents and the new
contents. If the file is in `bar`, then we make the `%delta` blob as a change
from the contents already in `bar` to the new contents. This means it is
possible to have multiple mutations to a file in the same changeset.

After we've computed the contents of modified files, we must add all the
unmodified files. We might naively suppose that `(~(uni by hat) bar)` would do
this, but this would add back all the deleted files. To get around this, we
let `sar` be the changed files, and then we simply roll over the files at our
current head, adding everything that isn't in `sar`.
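
Putting the pieces together, here is a Python sketch of the whole changeset
application, with the same invariants (inserts must be new, deletes must hit
an existing file, mutations may stack, and untouched files are carried over);
the data shapes match the other sketches, not clay itself:

```
def apply_changes(head_tree, edits, objects):
    """head_tree: path -> blob hash at the head; edits: (path, op, data) triples."""
    bar = {}
    for path, op, data in edits:
        if op == "ins":
            assert path not in bar and path not in head_tree
            bar[path] = make_direct(data)
        elif op == "del":
            assert path in head_tree or path in bar
            bar.pop(path, None)
        elif op == "mut":
            assert path in bar or path in head_tree
            base = bar[path] if path in bar else objects[head_tree[path]]
            bar[path] = make_delta(base, data)
    changed = {path for path, op, data in edits}
    for path, lobe in head_tree.items():      # carry over the unchanged files
        if path not in changed:
            bar[path] = objects[lobe]
    return bar
```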

This concludes our discussion of a local write.

Lifecycle of a Local Merge
--------------------------

Merges are pretty simple from the perspective of clay. A `%merg` kiss is sent
with already-generated merge state, and we simply apply the new state. The
question of how the merge is generated is much more complicated, but it is also
out of the scope of this section. If you're interested, take a look at
`++construct-merge:ze`.

We've seen most of the arms involved, so we'll go through most of it pretty
quickly. In `++call` we handle the `%merg` kiss.

```
        %merg                                         ::  direct state up
      =^  mos  ruf
        =+  une=(un p.q.hic now ruf)
        =+  ^=  zat
            (exem:(di:wake:une q.q.hic) hen now r.q.hic)
        =+  zot=abet.zat
        :-  -.zot
        =.  une  (pish:une q.q.hic +.zot ran.zat)
        abet:une(hez.yar ?.(=(%into -.q.hic) hez.yar.une [~ hen]))
      [mos ..^$]
```

As we've seen several times before, we set up a core for the local ship with
`++un`, set up a core for the local desk with `++di`, and update our
subscribers with `++wake`. We call `++exem` to execute the merge, and
`++abet:de`, `++pish:un`, and `++abet:un` resolve all our changes.

The only new arm here is `++exem:de`.

```
  ++  exem                                            ::  execute merge
    |=  [hen=duct wen=@da mer=mizu]                   ::  aka direct change
    ?.  (gte p.mer let.dom)  !!                       ::  no
    =.  +>.$
      %=  +>.$
        hut.ran  (~(uni by hut.r.mer) hut.ran)
        lat.ran  (~(uni by lat.r.mer) lat.ran)
        let.dom  p.mer
        hit.dom  (~(uni by q.mer) hit.dom)
      ==
    =+  ^=  hed                                       ::  head commit
        =<  q
        %-  ~(got by hut.ran)
        %-  ~(got by hit.dom)
        let.dom
    =.  ank.dom                                       ::  real checkout
      (~(checkout-ankh ze lim dom ran) hed)
    (echa:wake hen wen mer)                           ::  notify or w/e
```

We first do a quick sanity check that the head of the merge data is at least
as far along as the head of the old data; a merge may not rewind our revision
number.

We merge the new data in the obvious way. We do map merges for `hut` and `lat`
in rang to get all the new data and commits, we do a map merge in `hit` in our
dome to get all the new revision numbers, and we update our head to the most
recent revision.
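
The whole of `++exem`, including the checkout and notification described next,
reduces to something like this in the Python model (the `merge` value plays
the role of the `mizu`; `checkout` and `notify_unix` are stand-ins):

```
def apply_merge(desk, merge):
    assert merge.head_aeon >= desk.head_aeon   # a merge may not rewind the head
    desk.commits.update(merge.commits)         # union into hut in the rang
    desk.objects.update(merge.objects)         # union into lat in the rang
    desk.aeons.update(merge.aeons)             # union into hit in the dome
    desk.head_aeon = merge.head_aeon
    head = desk.commits[desk.aeons[desk.head_aeon]]
    desk.ankh = checkout(head["tree"])         # re-checkout at the new head
    notify_unix(desk)                          # the %ergo gift sent by ++echa
```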

Then, we check out the commit at our head and announce the results to unix.
`++echa` is the only new arm here.

```
  ++  echa                                            ::  announce raw
    |=  [hen=duct wen=@da mer=mizu]
    ^+  +>
    %=  +>
      vag  ?~(hez vag :_(vag [u.hez [%ergo who syd let.dom]]))
    ==
```

If we have a sync duct, we tell unix that a new revision is available.

This concludes our discussion of a local merge.