Initial public release of hocker

Parnell Springmeyer 2016-10-20 15:47:41 -05:00
commit 9b39e78744
51 changed files with 3647 additions and 0 deletions

20
.gitignore vendored Normal file

@ -0,0 +1,20 @@
dist
dist-*
cabal-dev
*.o
*.hi
*.chi
*.chs.h
*.dyn_o
*.dyn_hi
.hpc
.hsenv
.cabal-sandbox/
cabal.sandbox.config
*.prof
*.aux
*.hp
*.eventlog
.stack-work/
cabal.project.local
throwaway/*

201
LICENSE Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 Awake Networks
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

96
README.md Normal file

@ -0,0 +1,96 @@
# Welcome!
The `hocker` package provides a small set of utilities to fetch docker image
artifacts from docker registries and produce Nix derivations marrying docker and
Nix elegantly:
- [`hocker-image`](./hocker-image/README.md) for fetching a docker image
- [`hocker-layer`](./hocker-layer/README.md) for fetching a docker image's layers
- [`hocker-config`](./hocker-config/README.md) for fetching a docker image's configuration JSON
- [`hocker-manifest`](./hocker-manifest/README.md) for fetching a docker registry image manifest
- [`docker2nix`](./docker2nix/README.md) for generating Nix expressions calling the `fetchdocker`
derivations, given a docker registry image manifest
These tools _only_ work with version 2 of the **docker registry** and **docker
v1.10 or newer**.
The motivation for this tool came from a need to fetch docker image artifacts
from a docker registry without the stock docker tooling, which is designed to
work only with the docker daemon.
Our use-case (and the reason why this package exposes a `docker2nix` tool) was
the need to pull our docker images into a [NixOS system's store](https://nixos.org/nix/manual/#ch-about-nix) and load
those images from the store into the docker daemon running on that same system.
We desired this for two critical reasons:
1. The docker daemon no longer required an internet connection in order to pull
the docker images it needed
2. By fetching the docker images at build-time instead of run-time, failures
caused by non-existent images or image tags were caught earlier
We strove to make these tools useful outside of the context of Nix and NixOS;
all of them are therefore usable without Nix in the workflow.
For high-level documentation of each utility, please refer to the READMEs in
each project's respective directory (links are in the above list).
## Quickstart
Let's first retrieve a docker registry image manifest for the `debian:jessie`
docker image (note that we need the `library/` repository prefix because we are
pulling from the official debian repository!):
```shell
$ hocker-manifest library/debian jessie
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1528,
"digest": "sha256:054abe38b1e6f863befa4258cbfaf127b1cc9440d2e2e349b15d22e676b591e7"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 52550276,
"digest": "sha256:cd0a524342efac6edff500c17e625735bbe479c926439b263bbe3c8518a0849c"
}
]
}
```
## Private Registries
We developed these tools with private registries in mind; they currently
support three modes of authentication:
1. Nothing at all (simply do not supply `--token` or `--username` and
`--password`)
2. Bearer token-based authentication: retrieve a token and then supply it via
the `--token` flag
3. Basic authentication with `--username` and `--password` (most common with
nginx proxied registries providing basic auth protection; you should be
careful to ensure you're only sending requests to registries exposed via TLS
or SSL!)
A caveat to #1: if you do not supply any authentication credential flags and
you also do not supply a `--registry` flag, the tools assume you wish to make a
request to the public Docker Hub registry; in that case they ask the registry
auth server for a short-lived authentication token and then make the request to
the public Docker Hub registry.
# TODO
- [X] ~Get a nix-build workflow working for hocker~
- [ ] Work on a nix-shell based dev workflow
- [ ] Document types in `Exceptions`, `ErrorHandling`, etc.
- [x] ~Rename the `Types/Extra.hs` module, that's poorly named~ (I got rid of it)
- [x] ~Write an updated and accurate README introduction~
- [X] Rename `ContainerName` and `ContainerTag` to `ImageName` and `ImageTag` to
be more consistent with the correct docker terminology
- [x] ~Remove the run prefix from most of the `V1_2.hs` module functions~ (replaced with a `do` prefix)
- [X] ~Use HockerException in docker2nix's lib functions~
- [x] ~Better document the types and function signatures in `Nix/FetchDocker.hs`~
- [X] L258 fix docker-layer to hocker-layer
- [ ] Proofread comments
- [ ] `Data/Docker/Image/Types.hs` can probably move to a more general location
I think
- [ ] Use friendly module prefixing more consistently and cleanup usage
- [ ] Strip out the unused docker image V1 code

2
Setup.hs Normal file

@ -0,0 +1,2 @@
import Distribution.Simple
main = defaultMain

46
config.nix Normal file

@ -0,0 +1,46 @@
{ allowUnfree = true;
packageOverrides = pkgs: {
haskellPackages = pkgs.haskellPackages.override {
overrides = haskellPackagesNew: haskellPackagesOld: {
optparse-applicative =
pkgs.haskell.lib.dontCheck
(haskellPackagesNew.callPackage ./nix/optparse-applicative.nix { });
optparse-generic =
haskellPackagesNew.callPackage ./nix/optparse-generic.nix { };
turtle =
haskellPackagesNew.callPackage ./nix/turtle.nix { };
wreq =
haskellPackagesNew.callPackage ./nix/wreq.nix { };
http-client =
haskellPackagesNew.callPackage ./nix/http-client.nix { };
http-client-tls =
haskellPackagesNew.callPackage ./nix/http-client-tls.nix { };
hocker =
pkgs.haskell.lib.overrideCabal
( haskellPackagesNew.callPackage ./default.nix { } )
( oldDerivation: {
testToolDepends =
(oldDerivation.testToolDepends or []) ++[ pkgs.nix ];
buildDepends =
(oldDerivation.buildDepends or []) ++ [ pkgs.makeWrapper ];
postInstall =
(oldDerivation.postInstall or "") + ''
wrapProgram $out/bin/hocker-* \
--suffix PATH : ${pkgs.nix}/bin
wrapProgram $out/bin/docker2nix \
--suffix PATH : ${pkgs.nix}/bin
'';
}
);
};
};
};
}

39
default.nix Normal file

@ -0,0 +1,39 @@
{ mkDerivation, aeson, aeson-pretty, ansi-wl-pprint, async, base
, bytestring, concurrentoutput, containers, cryptonite, data-fix
, deepseq, directory, exceptions, filepath, foldl, hnix
, http-client, http-types, lens, lens-aeson, lifted-base, memory
, mtl, neat-interpolation, network, network-uri, optional-args
, optparse-applicative, optparse-generic, pooled-io, pureMD5
, scientific, stdenv, tar, tasty, tasty-golden, tasty-hunit
, tasty-quickcheck, tasty-smallcheck, temporary, text, time
, transformers, turtle, unordered-containers, uri-bytestring
, vector, wreq, zlib
}:
mkDerivation {
pname = "hocker";
version = "0.1.0.0";
src = ./.;
isLibrary = true;
isExecutable = true;
libraryHaskellDepends = [
aeson aeson-pretty ansi-wl-pprint async base bytestring
concurrentoutput containers cryptonite data-fix deepseq directory
exceptions filepath foldl hnix http-client http-types lens
lens-aeson lifted-base memory mtl neat-interpolation network
network-uri optional-args optparse-applicative optparse-generic
pooled-io pureMD5 scientific tar temporary text time transformers
turtle unordered-containers uri-bytestring vector wreq zlib
];
executableHaskellDepends = [
base bytestring cryptonite data-fix filepath hnix lens mtl network
optional-args optparse-applicative optparse-generic temporary text
];
testHaskellDepends = [
aeson ansi-wl-pprint base bytestring containers cryptonite mtl
network network-uri tasty tasty-golden tasty-hunit tasty-quickcheck
tasty-smallcheck text unordered-containers
];
homepage = "https://github.com/awakenetworks/hocker#readme";
description = "CLI tools and library to interact with a V2 Docker Registry";
license = stdenv.lib.licenses.asl20;
}

77
docker2nix/Main.hs Normal file

@ -0,0 +1,77 @@
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeOperators #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : docker2nix/Main
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Main where
import Data.ByteString.Lazy.Char8 as C8L
import Data.Maybe (fromMaybe)
import qualified Data.Text as T
import Options.Generic
import System.IO (hWaitForInput, stdin)
import Data.Docker.Image.Types
import Data.Docker.Nix.FetchDocker as Nix.FetchDocker
import Lib
import Network.Wreq.Docker.Registry.V2 as Docker.Registry
import Types
import Types.ImageName
import Types.ImageTag
-- | Top-level optparse-generic CLI args data type and specification.
data ProgArgs w = ProgArgs
{ -- | URI for the registry, optional
registry :: w ::: Maybe RegistryURI
<?> "URI of registry, defaults to the Docker Hub registry"
-- | Filepath to a file containing the manifest JSON
, manifest :: w ::: Maybe FilePath
<?> "Fetch image manifest from a path on the filesystem"
-- | Alternative docker image name made available in the Nix
-- expression fetchdocker derivation
, altImageName :: w ::: Maybe T.Text
<?> "Alternate image name provided in the `fetcdocker` derivation"
-- | Docker image name (includes the reponame, e.g: library/debian)
, name :: ImageName
-- | Docker image tag
, imageTag :: ImageTag
} deriving (Generic)
instance ParseRecord (ProgArgs Wrapped)
deriving instance Show (ProgArgs Unwrapped)
progSummary :: T.Text
progSummary = "Produce a Nix expression given a manifest for a docker image via stdin or via a filepath"
main :: IO ()
main = unwrapRecord progSummary >>= \ProgArgs{..} -> do
let (imageRepo, imageName) = Lib.splitImgName name
dockerRegistry = fromMaybe defaultRegistry registry
manifestJSON <-
case manifest of
Just f -> C8L.readFile f
Nothing -> do
let h = stdin
hWaitForInput h (-1)
C8L.hGetContents h
exprs <- Nix.FetchDocker.generate HockerImageMeta{..}
either (Lib.exitProgFail . show) Lib.pprintNixExpr exprs

62
docker2nix/README.md Normal file

@ -0,0 +1,62 @@
# Generate a Nix expression to fetch a docker image
This tool takes a docker registry V2 image manifest JSON, either on stdin or
from a file, and generates a Nix expression that uses the fetchdocker machinery
to pull the individual layers and build an image compositor that can stream to
`docker load`.
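The same generation is also available as a library call. Below is a minimal
sketch mirroring `docker2nix/Main.hs` and the `HockerImageMeta` record shipped
in this commit; the wrapper name `manifestToNix` and its argument handling are
illustrative assumptions, not part of the package's documented API.
```haskell
{-# LANGUAGE RecordWildCards #-}

-- Hypothetical example module; it mirrors the flow of docker2nix/Main.hs.
module ManifestToNix where

import qualified Data.ByteString.Lazy.Char8      as C8L
import           Data.Docker.Image.Types
import           Data.Docker.Nix.FetchDocker     as Nix.FetchDocker
import           Lib
import           Network.Wreq.Docker.Registry.V2 as Docker.Registry
import           Types
import           Types.ImageName
import           Types.ImageTag

-- | Read a registry V2 manifest from disk and pretty-print the generated
-- fetchdocker Nix expression, just as the docker2nix executable does.
manifestToNix :: FilePath -> ImageName -> ImageTag -> IO ()
manifestToNix manifestPath name imageTag = do
  manifestJSON <- C8L.readFile manifestPath
  let (imageRepo, imageName) = Lib.splitImgName name
      dockerRegistry         = defaultRegistry
      altImageName           = Nothing
  exprs <- Nix.FetchDocker.generate HockerImageMeta{..}
  either (Lib.exitProgFail . show) Lib.pprintNixExpr exprs
```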
## Quickstart
```shell
$ docker2nix --help
Produce a Nix expression given a manifest for a docker image via stdin or via a
filepath
Usage: docker2nix [--registry URI] [--manifest STRING] [--altName TEXT]
IMAGE-NAME IMAGE-TAG
Available options:
-h,--help Show this help text
--registry URI URI of registry, defaults to the Docker Hub registry
--manifest STRING Fetch image manifest from a path on the filesystem
--altName TEXT Alternate image name provided in the `fetchdocker`
derivation
IMAGE-NAME Docker image name, e.g: 'debian' in debian:jessie
IMAGE-TAG Docker image tag identifier, e.g: 'jessie' in
debian:jessie
```
Generating a fetchdocker Nix expression from a docker registry V2 image manifest
JSON retrieved by `hocker-manifest`:
```shell
$ hocker-manifest library/debian jessie | docker2nix library/debian jessie
{
config.docker.images.debian = pkgs.fetchdocker {
name = "debian";
registry = "https://registry-1.docker.io/v2/";
repository = "library";
imageName = "debian";
tag = "jessie";
imageConfig = pkgs.fetchDockerConfig {
inherit registry repository imageName tag;
sha256 = "1rwinmvfc8jxn54y7qnj82acrc97y7xcnn22zaz67y76n4wbwjh5";
};
imageLayers = let
layer0 = pkgs.fetchDockerLayer {
inherit registry repository imageName tag;
layerDigest = "cd0a524342efac6edff500c17e625735bbe479c926439b263bbe3c8518a0849c";
sha256 = "1744l0c8ag5y7ck9nhr6r5wy9frmaxi7xh80ypgnxb7g891m42nd";
};
in [ layer0 ];
};
}
```
And to load a fetched docker image into a running docker daemon on a NixOS
system (NB: the preferred way to do the below might be a systemd unit, with the
`config.docker.images.debian` attribute parametrizing the path to
`compositeImage.sh`):
```shell
$ /nix/store/6qn5i7p6x3c3qylvzqf76fqgd0gl47cv-debian/compositeImage.sh | docker load
```

41
hocker-config/Main.hs Normal file

@ -0,0 +1,41 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
-----------------------------------------------------------------------------
-- |
-- Module : hocker-config/Main
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Main where
import Data.Maybe (fromMaybe)
import qualified Data.Text
import Options.Generic
import Lib
import Network.Wreq.Docker.Image.V1_2 as Docker.Image
import Network.Wreq.Docker.Registry.V2
import Types
progSummary :: Data.Text.Text
progSummary = "Fetch a docker image config JSON from the registry"
main :: IO ()
main = unwrapRecord progSummary >>= \OptArgs{..} -> do
let dockerRegistry = fromMaybe defaultRegistry registry
auth <- mkAuth dockerRegistry imageName credentials
config <- Docker.Image.fetchConfig $
HockerMeta
{ outDir = Nothing
, imageLayer = Nothing
, ..
}
either (Lib.exitProgFail . show) (Lib.writeOrPrint out) config

107
hocker-config/README.md Normal file

@ -0,0 +1,107 @@
# Retrieve a docker image configuration JSON
This tool fetches the specified docker image's configuration JSON from the
docker registry.
## Quickstart
```shell
Fetch a docker image config JSON from the registry
Usage: hocker-config [--registry URI] ([-u|--username BASIC USERNAME]
[-p|--password BASIC PASSWORD] | [-t|--token BEARER TOKEN])
[--out STRING] IMAGE-NAME IMAGE-TAG
Available options:
-h,--help Show this help text
--registry URI URI of registry, defaults to the Docker Hub registry
-u,--username BASIC USERNAME
Username part of a basic auth credential
-p,--password BASIC PASSWORD
Password part of a basic auth credential
-t,--token BEARER TOKEN Bearer token retrieved from a call to `docker login`
(mutually exclusive to --username and --password)
--out STRING Write content to location
IMAGE-NAME Docker image name, e.g: 'debian' in debian:jessie
IMAGE-TAG Docker image tag identifier, e.g: 'jessie' in
debian:jessie
```
```shell
$ hocker-config library/debian jessie | jq
{
"architecture": "amd64",
"config": {
"Hostname": "200591939db7",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/bash"
],
"ArgsEscaped": true,
"Image": "sha256:9e77974e778cc6730c21889f33f2dcb141f9f632745ba2c3914dd62250ea93c9",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"container": "9a3fb25551fee47cea1203cbc2a6022dc3ffea8bc2010733e1286c4702cdf778",
"container_config": {
"Hostname": "200591939db7",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/bash\"]"
],
"ArgsEscaped": true,
"Image": "sha256:9e77974e778cc6730c21889f33f2dcb141f9f632745ba2c3914dd62250ea93c9",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"created": "2017-05-08T23:28:15.327579341Z",
"docker_version": "17.04.0-ce",
"history": [
{
"created": "2017-05-08T23:28:14.437236885Z",
"created_by": "/bin/sh -c #(nop) ADD file:f4e6551ac34ab446a297849489a5693d67a7e76c9cb9ed9346d82392c9d9a5fe in / "
},
{
"created": "2017-05-08T23:28:15.327579341Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:8d4d1ab5ff74fc361fb74212fff3b6dc1e6c16d1e1f0e8b44f9a9112b00b564f"
]
}
}
```

44
hocker-image/Main.hs Normal file

@ -0,0 +1,44 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
-----------------------------------------------------------------------------
-- |
-- Module : hocker-image/Main
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Main where
import Data.Maybe (fromMaybe)
import qualified Data.Text
import Data.Text.IO as TIO
import Options.Generic
import System.IO.Temp as Tmp
import Lib
import Network.Wreq.Docker.Image.V1_2 as Docker.Image
import Network.Wreq.Docker.Registry.V2
import Types
progSummary :: Data.Text.Text
progSummary = "Fetch a docker image from a docker registry without using docker"
main :: IO ()
main = unwrapRecord progSummary >>= \OptArgs{..} -> do
let dockerRegistry = fromMaybe defaultRegistry registry
auth <- mkAuth dockerRegistry imageName credentials
img <- withSystemTempDirectory "hocker-image-XXXXXX" $ \d ->
Docker.Image.fetchAndAssemble $
HockerMeta
{ outDir = Just d
, imageLayer = Nothing
, ..
}
either (Lib.exitProgFail . show) TIO.putStrLn img

31
hocker-image/README.md Normal file

@ -0,0 +1,31 @@
# Fetch a docker image
## Quickstart
```shell
Fetch a docker image from a docker registry without using docker
Usage: hocker-image [--registry URI] ([-u|--username BASIC USERNAME]
[-p|--password BASIC PASSWORD] | [-t|--token BEARER TOKEN])
[--out STRING] IMAGE-NAME IMAGE-TAG
Available options:
-h,--help Show this help text
--registry URI URI of registry, defaults to the Docker Hub registry
-u,--username BASIC USERNAME
Username part of a basic auth credential
-p,--password BASIC PASSWORD
Password part of a basic auth credential
-t,--token BEARER TOKEN Bearer token retrieved from a call to `docker login`
(mutually exclusive to --username and --password)
--out STRING Write content to location
IMAGE-NAME Docker image name, e.g: 'debian' in debian:jessie
IMAGE-TAG Docker image tag identifier, e.g: 'jessie' in
debian:jessie
```
```shell
$ hocker-image --out=./debian-latest.tar.gz library/debian latest
Downloading layer: 22def84 => decompressed => wrote 159fbd8
./debian-latest.tar.gz
```

72
hocker-layer/Main.hs Normal file

@ -0,0 +1,72 @@
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeOperators #-}
-----------------------------------------------------------------------------
-- |
-- Module : hocker-layer/Main
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Main where
import qualified Crypto.Hash as Hash
import Data.Maybe (fromMaybe)
import qualified Data.Text
import Options.Generic
import Lib
import Network.Wreq.Docker.Image.V1_2 as Docker.Image
import Network.Wreq.Docker.Registry.V2
import Types
import Types.Hash ()
import Types.ImageName
import Types.ImageTag
import Types.URI ()
data ProgArgs w = ProgArgs
{ -- | URI for the registry, optional
registry :: w ::: Maybe RegistryURI
<?> "URI of registry, defaults to the Docker Hub registry"
, credentials :: Maybe Credentials
-- | Filesystem path to write output to
, out :: w ::: Maybe FilePath
<?> "Write content to location"
-- | Layer sha256 hash digest to fetch from registry
, imageLayer :: w ::: Hash.Digest Hash.SHA256
<?> "Layer to fetch, by hash digest (unprefixed by the hash algorithm identifier)"
-- | Docker image name (includes the repository, e.g: library/debian)
, imageName :: ImageName
-- | Docker image tag
, imageTag :: ImageTag
} deriving (Generic)
instance ParseRecord (ProgArgs Wrapped)
deriving instance Show (ProgArgs Unwrapped)
progSummary :: Data.Text.Text
progSummary = "Fetch a docker image layer from a docker registry without using docker"
main :: IO ()
main = unwrapRecord progSummary >>= \ProgArgs{..} -> do
let dockerRegistry = fromMaybe defaultRegistry registry
auth <- mkAuth dockerRegistry imageName credentials
layerPath <- Docker.Image.fetchLayer $
HockerMeta
{ outDir = Nothing
, imageLayer = Just imageLayer
, ..
}
either (Lib.exitProgFail . show) Prelude.putStrLn layerPath

27
hocker-layer/README.md Normal file

@ -0,0 +1,27 @@
# Retrieve an individual docker image layer
## Quickstart
```shell
Fetch a docker image layer from a docker registry without using docker
Usage: hocker-layer [--registry URI] ([-u|--username BASIC USERNAME]
[-p|--password BASIC PASSWORD] | [-t|--token BEARER TOKEN])
[--out STRING] (-l|--layer SHA256) IMAGE-NAME IMAGE-TAG
Available options:
-h,--help Show this help text
--registry URI URI of registry, defaults to the Docker Hub registry
-u,--username BASIC USERNAME
Username part of a basic auth credential
-p,--password BASIC PASSWORD
Password part of a basic auth credential
-t,--token BEARER TOKEN Bearer token retrieved from a call to `docker login`
(mutually exclusive to --username and --password)
--out STRING Write content to location
-l,--layer SHA256 Layer to fetch, by hash digest (unprefixed by the
hash algorithm identifier)
IMAGE-NAME Docker image name, e.g: 'debian' in debian:jessie
IMAGE-TAG Docker image tag identifier, e.g: 'jessie' in
debian:jessie
```

41
hocker-manifest/Main.hs Normal file

@ -0,0 +1,41 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
-----------------------------------------------------------------------------
-- |
-- Module : hocker-manifest/Main
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Main where
import Data.Maybe (fromMaybe)
import qualified Data.Text
import Options.Generic
import Lib
import Network.Wreq.Docker.Image.V1_2 as Docker.Image
import Network.Wreq.Docker.Registry.V2
import Types
progSummary :: Data.Text.Text
progSummary = "Pull a docker image manifest from the registry"
main :: IO ()
main = unwrapRecord progSummary >>= \OptArgs{..} -> do
let dockerRegistry = fromMaybe defaultRegistry registry
auth <- mkAuth dockerRegistry imageName credentials
manifest <- Docker.Image.fetchImageManifest $
HockerMeta
{ outDir = Nothing
, imageLayer = Nothing
, ..
}
either (Lib.exitProgFail . show) (Lib.writeOrPrint out) manifest

52
hocker-manifest/README.md Normal file

@ -0,0 +1,52 @@
# Retrieve a docker registry V2 image manifest
This utility retrieves a V2 docker image manifest from the docker registry.
NB: the V2 docker image manifest retrieved from the docker registry is a
manifest of the configuration JSON and layer blobs stored by the registry; it
is _not_ the same as the manifest JSON file from the docker image V1.2 _image_
specification.
## Quickstart
```shell
Pull a docker image manifest from the registry
Usage: hocker-manifest [--registry URI] ([-u|--username BASIC USERNAME]
[-p|--password BASIC PASSWORD] |
[-t|--token BEARER TOKEN]) [--out STRING] IMAGE-NAME
IMAGE-TAG
Available options:
-h,--help Show this help text
--registry URI URI of registry, defaults to the Docker Hub registry
-u,--username BASIC USERNAME
Username part of a basic auth credential
-p,--password BASIC PASSWORD
Password part of a basic auth credential
-t,--token BEARER TOKEN Bearer token retrieved from a call to `docker login`
(mutually exclusive to --username and --password)
--out STRING Write content to location
IMAGE-NAME Docker image name, e.g: 'debian' in debian:jessie
IMAGE-TAG Docker image tag identifier, e.g: 'jessie' in
debian:jessie
```
```shell
$ hocker-manifest library/debian jessie
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1528,
"digest": "sha256:3e83c23dba6a16cd936a3dc044df71b26706c5a4c28181bc3ca4a4af9f5f38ee"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 52584016,
"digest": "sha256:10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9"
}
]
}
```

220
hocker.cabal Normal file

@ -0,0 +1,220 @@
name: hocker
version: 0.1.0.0
synopsis: CLI tools and library to interact with a V2 Docker Registry
description: Please see README.md
homepage: https://github.com/awakenetworks/hocker#readme
license: Apache-2.0
license-file: LICENSE
author: Awake Networks
maintainer: opensource@awakenetworks.com
copyright: 2016 Awake Networks
category: Web
build-type: Simple
extra-source-files: LICENSE
cabal-version: >=1.10
library
ghc-options: -Wall
hs-source-dirs: src
exposed-modules:
Lib,
Types,
Types.URI,
Types.Hash,
Types.ImageName,
Types.ImageTag,
Types.Exceptions,
Data.Docker.Nix,
Data.Docker.Nix.Lib,
Data.Docker.Nix.FetchDocker,
Data.Docker.Image.Types,
Data.Docker.Image.AesonHelpers,
Data.Docker.Image.V1.Layer,
Data.Docker.Image.V1.Types,
Data.Docker.Image.V1_2.Types,
Network.Wreq.ErrorHandling,
Network.Wreq.Docker.Registry.V2,
Network.Wreq.Docker.Image.V1_2,
Network.Wreq.Docker.Image.Lib
build-depends:
base >= 4.9 && < 5,
aeson >= 0.9.0.1,
ansi-wl-pprint >= 0.6.7.3,
lens-aeson >= 1.0,
async >= 2.0.0.0 && < 2.2,
exceptions >= 0.8,
text >= 1.2,
directory >= 1.2.2.0,
temporary >= 1.2,
pureMD5 >= 2.1,
vector >= 0.11,
optparse-generic >= 1.1.5,
optparse-applicative >= 0.13,
aeson-pretty >= 0.8,
filepath >= 1.4,
deepseq >= 1.4,
lens >= 4.0,
mtl >= 2.2,
transformers >= 0.4,
lifted-base >= 0.2.3.8,
zlib >= 0.6,
http-types >= 0.9.1,
http-client >= 0.4,
tar >= 0.5,
network >= 2.6,
scientific >= 0.3,
pooled-io >= 0.0.2,
concurrentoutput >= 0.2,
bytestring >= 0.10,
uri-bytestring >= 0.2,
unordered-containers >= 0.2,
containers >= 0.5,
memory >= 0.11,
turtle >= 1.3,
cryptonite >= 0.13,
foldl >= 1.0,
time >= 1.4,
network-uri >= 2.6,
wreq >= 0.4,
data-fix >= 0.0.3,
hnix >= 0.3.4,
neat-interpolation >= 0.3.2,
optional-args
default-language: Haskell2010
executable hocker-image
hs-source-dirs: hocker-image
main-is: Main.hs
ghc-options: -threaded -rtsopts -with-rtsopts=-N -Wall
build-depends:
base >= 4.9 && < 5,
text >= 1.2,
lens >= 4.0,
optparse-generic >= 1.1.5,
temporary >= 1.2,
optparse-applicative >= 0.13,
filepath >= 1.4,
mtl >= 2.2,
network >= 2.6,
bytestring >= 0.10,
optional-args,
hocker
default-language: Haskell2010
executable hocker-layer
hs-source-dirs: hocker-layer
main-is: Main.hs
ghc-options: -threaded -rtsopts -with-rtsopts=-N -Wall
build-depends:
base >= 4.9 && < 5,
text >= 1.2,
lens >= 4.0,
optparse-generic >= 1.1.5,
temporary >= 1.2,
optparse-applicative >= 0.13,
filepath >= 1.4,
mtl >= 2.2,
network >= 2.6,
bytestring >= 0.10,
cryptonite >= 0.13,
optional-args,
hocker
default-language: Haskell2010
executable hocker-config
hs-source-dirs: hocker-config
main-is: Main.hs
ghc-options: -threaded -rtsopts -with-rtsopts=-N -Wall
build-depends:
base >= 4.9 && < 5,
text >= 1.2,
lens >= 4.0,
optparse-generic >= 1.1.5,
temporary >= 1.2,
optparse-applicative >= 0.13,
filepath >= 1.4,
mtl >= 2.2,
network >= 2.6,
bytestring >= 0.10,
optional-args,
hocker
default-language: Haskell2010
executable hocker-manifest
hs-source-dirs: hocker-manifest
main-is: Main.hs
ghc-options: -threaded -rtsopts -with-rtsopts=-N -Wall
build-depends:
base >= 4.9 && < 5,
text >= 1.2,
lens >= 4.0,
optparse-generic >= 1.1.5,
temporary >= 1.2,
optparse-applicative >= 0.13,
filepath >= 1.4,
mtl >= 2.2,
network >= 2.6,
bytestring >= 0.10,
optional-args,
hocker
default-language: Haskell2010
executable docker2nix
hs-source-dirs: docker2nix
main-is: Main.hs
ghc-options: -threaded -rtsopts -with-rtsopts=-N -Wall
build-depends:
base >= 4.9 && < 5,
text >= 1.2,
lens >= 4.0,
optparse-generic >= 1.1.5,
temporary >= 1.2,
optparse-applicative >= 0.13,
filepath >= 1.4,
mtl >= 2.2,
network >= 2.6,
bytestring >= 0.10,
hnix >= 0.3.4,
data-fix >= 0.0.3,
optional-args,
hocker
test-suite hocker-tests
type: exitcode-stdio-1.0
hs-source-dirs: test
main-is: Main.hs
other-modules:
Tests.Data.Docker.Image.V1,
Tests.Data.Docker.Image.V1_2,
Tests.Data.Docker.Nix.FetchDocker
build-depends:
base >= 4.9 && < 5,
aeson >= 0.9.0.1,
tasty >= 0.11,
tasty-hunit >= 0.9,
text >= 1.2,
network >= 2.6,
network-uri >= 2.6,
ansi-wl-pprint >= 0.6.7.3,
unordered-containers >= 0.2,
tasty-quickcheck >= 0.8,
tasty-smallcheck >= 0.8,
tasty-golden >= 2.3,
mtl >= 2.2,
bytestring >= 0.10,
cryptonite >= 0.13,
containers >= 0.5,
hocker
ghc-options: -threaded -rtsopts -with-rtsopts=-N
default-language: Haskell2010
source-repository head
type: git
location: https://github.com/awakenetworks/hocker

19
nix/http-client-tls.nix Normal file

@ -0,0 +1,19 @@
{ mkDerivation, base, bytestring, case-insensitive, connection
, cryptonite, data-default-class, exceptions, hspec, http-client
, http-types, memory, network, stdenv, tls, transformers
}:
mkDerivation {
pname = "http-client-tls";
version = "0.3.3";
sha256 = "0r50h7lhrwmxcmiq5nw1rxnpda3k6mhz4jsd86m56ymai5lnf77c";
libraryHaskellDepends = [
base bytestring case-insensitive connection cryptonite
data-default-class exceptions http-client http-types memory network
tls transformers
];
testHaskellDepends = [ base hspec http-client http-types ];
doCheck = false;
homepage = "https://github.com/snoyberg/http-client";
description = "http-client backend using the connection package and tls library";
license = stdenv.lib.licenses.mit;
}

27
nix/http-client.nix Normal file

@ -0,0 +1,27 @@
{ mkDerivation, array, async, base, base64-bytestring
, blaze-builder, bytestring, case-insensitive, containers, cookie
, deepseq, directory, exceptions, filepath, ghc-prim, hspec
, http-types, mime-types, monad-control, network, network-uri
, random, stdenv, streaming-commons, text, time, transformers, zlib
}:
mkDerivation {
pname = "http-client";
version = "0.5.6.1";
sha256 = "1v9bdb8dkhb5g6jl9azk86ig7ia8xh9arr64n7s8r94fp0vl6c1c";
libraryHaskellDepends = [
array base base64-bytestring blaze-builder bytestring
case-insensitive containers cookie deepseq exceptions filepath
ghc-prim http-types mime-types network network-uri random
streaming-commons text time transformers
];
testHaskellDepends = [
async base base64-bytestring blaze-builder bytestring
case-insensitive containers deepseq directory hspec http-types
monad-control network network-uri streaming-commons text time
transformers zlib
];
doCheck = false;
homepage = "https://github.com/snoyberg/http-client";
description = "An HTTP client engine";
license = stdenv.lib.licenses.mit;
}

15
nix/optparse-applicative.nix Normal file

@ -0,0 +1,15 @@
{ mkDerivation, ansi-wl-pprint, base, process, QuickCheck, stdenv
, transformers, transformers-compat
}:
mkDerivation {
pname = "optparse-applicative";
version = "0.13.0.0";
sha256 = "1b0c5fdq8bd070g24vrjrwlq979r8dk8mys6aji9hy1l9pcv3inf";
libraryHaskellDepends = [
ansi-wl-pprint base process transformers transformers-compat
];
testHaskellDepends = [ base QuickCheck ];
homepage = "https://github.com/pcapriotti/optparse-applicative";
description = "Utilities and combinators for parsing command line options";
license = stdenv.lib.licenses.bsd3;
}

14
nix/optparse-generic.nix Normal file

@ -0,0 +1,14 @@
{ mkDerivation, base, bytestring, optparse-applicative, semigroups
, stdenv, system-filepath, text, time, transformers, void
}:
mkDerivation {
pname = "optparse-generic";
version = "1.1.5";
sha256 = "1xg6c7h6h8q64gwskh7l4h7qn7w4y0ixf88grgk23xdficgmsyms";
libraryHaskellDepends = [
base bytestring optparse-applicative semigroups system-filepath
text time transformers void
];
description = "Auto-generate a command-line parser for your datatype";
license = stdenv.lib.licenses.bsd3;
}

11
nix/shell.nix Normal file

@ -0,0 +1,11 @@
{ ghc }:
let
config = import ../config.nix;
pkgs = import <nixpkgs> { inherit config; };
in with pkgs; pkgs.haskell.lib.buildStackProject {
inherit ghc;
name = "hocker-stack-shell";
buildInputs = [
zlib cabal-install
];
}

20
nix/turtle.nix Normal file

@ -0,0 +1,20 @@
{ mkDerivation, ansi-wl-pprint, async, base, bytestring, clock
, directory, doctest, foldl, hostname, managed, optional-args
, optparse-applicative, process, stdenv, stm, system-fileio
, system-filepath, temporary, text, time, transformers, unix
, unix-compat
}:
mkDerivation {
pname = "turtle";
version = "1.3.1";
sha256 = "0pnxislwq6vzllrlva9la9wisvz54gb74n3nprw145rxszw0ag93";
libraryHaskellDepends = [
ansi-wl-pprint async base bytestring clock directory foldl hostname
managed optional-args optparse-applicative process stm
system-fileio system-filepath temporary text time transformers unix
unix-compat
];
testHaskellDepends = [ base doctest ];
description = "Shell programming, Haskell-style";
license = stdenv.lib.licenses.bsd3;
}

34
nix/wreq.nix Normal file

@ -0,0 +1,34 @@
{ mkDerivation, aeson, aeson-pretty, attoparsec, authenticate-oauth
, base, base16-bytestring, base64-bytestring, byteable, bytestring
, case-insensitive, containers, cryptohash, directory, doctest
, exceptions, filepath, ghc-prim, hashable, http-client
, http-client-tls, http-types, HUnit, lens, lens-aeson, mime-types
, network-info, psqueues, QuickCheck, snap-core, snap-server
, stdenv, template-haskell, temporary, test-framework
, test-framework-hunit, test-framework-quickcheck2, text, time
, time-locale-compat, transformers, unix-compat
, unordered-containers, uuid, vector
}:
mkDerivation {
pname = "wreq";
version = "0.5.0.1";
sha256 = "138n138rczs5xb7pr25b5a2ajhhxph7vfrh02x71w2alh2xr4akc";
libraryHaskellDepends = [
aeson attoparsec authenticate-oauth base base16-bytestring byteable
bytestring case-insensitive containers cryptohash exceptions
ghc-prim hashable http-client http-client-tls http-types lens
lens-aeson mime-types psqueues template-haskell text time
time-locale-compat unordered-containers
];
testHaskellDepends = [
aeson aeson-pretty base base64-bytestring bytestring
case-insensitive containers directory doctest filepath hashable
http-client http-types HUnit lens lens-aeson network-info
QuickCheck snap-core snap-server temporary test-framework
test-framework-hunit test-framework-quickcheck2 text time
transformers unix-compat unordered-containers uuid vector
];
homepage = "http://www.serpentine.com/wreq";
description = "An easy-to-use HTTP client library";
license = stdenv.lib.licenses.bsd3;
}

13
release.nix Normal file

@ -0,0 +1,13 @@
let config = import ./config.nix;
in
{ pkgs ? import <nixpkgs> { inherit config; } }:
let
darwinPkgs = import <nixpkgs> { inherit config; system = "x86_64-darwin"; };
linuxPkgs = import <nixpkgs> { inherit config; system = "x86_64-linux" ; };
pkgs = import <nixpkgs> { inherit config; };
in
{ hocker-linux = linuxPkgs.haskellPackages.hocker;
hocker-darwin = darwinPkgs.haskellPackages.hocker;
hocker = pkgs.haskellPackages.hocker;
}

18
src/Data/Docker/Image/AesonHelpers.hs Normal file

@ -0,0 +1,18 @@
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Image.AesonHelpers
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Data.Docker.Image.AesonHelpers where
import Data.Aeson
import Data.Aeson.TH
-- | Produce a default option record with @omitNothingFields@ set to
-- True by default.
stdOpts :: Options
stdOpts = defaultOptions{ omitNothingFields = True }
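-- A hedged usage sketch ('Example' is a hypothetical type; 'deriveJSON' is
-- the Template Haskell deriver from the Data.Aeson.TH import above):
--
-- > data Example = Example { name :: String, note :: Maybe String }
-- > $(deriveJSON stdOpts ''Example)
--
-- With @omitNothingFields = True@, encoding @Example "x" Nothing@ yields
-- @{"name":"x"}@ instead of @{"name":"x","note":null}@.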

52
src/Data/Docker/Image/Types.hs Normal file

@ -0,0 +1,52 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Image.Types
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Data.Docker.Image.Types where
import Data.ByteString.Lazy.Char8 as C8L
import qualified Data.Text as T
import Types
import Types.ImageTag
-- | Record of all the metadata we need for a docker image; this
-- includes the basics like registry location, image repository name,
-- image name, image tag, a possible alternative image name, and
-- finally the full manifest JSON for the docker image from which a
-- complete image can be constructed (supplying the config JSON and
-- references to all of the layers).
data HockerImageMeta = HockerImageMeta
{ -- | Docker image repo, the first part of a repository+name
-- separated by a "/"; e.g: library/debian.
imageRepo :: RepoNamePart
-- | Docker image name, the second part of a repository+name
-- separated by a "/"; e.g: library/debian.
, imageName :: ImageNamePart
-- | Docker image tag
, imageTag :: ImageTag
-- | A docker image manifest JSON blob as usually fetched from a
-- docker registry.
--
-- TODO: switch this to the JSON AST type?
, manifestJSON :: C8L.ByteString
-- | The URI (even if the default public registry) of the docker
-- registry.
, dockerRegistry :: RegistryURI
-- | An alternative name for the docker image provided in the
-- output Nix `fetchdocker` derivation expressions. Not replacing
-- @imageName@ but providing a method for declaring up-front a
-- possibly cleaner or more intuitive name for use within Nix.
, altImageName :: Maybe T.Text
} deriving (Show)

249
src/Data/Docker/Image/V1/Layer.hs Normal file

@ -0,0 +1,249 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Image.V1.Layer
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
--
-- Many of these functions are named after their equivalent functions
-- in the docker Golang source code.
--
-- <https://github.com/docker/docker/blob/6e8a2cd29113896acfc3f97a43dd27f751f2f519/layer/layer.go layer.go>
----------------------------------------------------------------------------
module Data.Docker.Image.V1.Layer where
import Control.Monad
import Data.Aeson
import qualified Data.ByteString.Lazy.Char8 as CL8
import Data.Coerce
import Data.Docker.Image.V1.Types
import Data.Foldable
import qualified Data.HashMap.Strict as H
import Data.Monoid
import Data.Sequence as Seq
import Data.Sequence.Lens
import qualified Data.Text as T
import Lib
type Parent = ChainID
type TopLayerJSON = Data.Aeson.Object
-- | Produce a @ChainID@ using a sequence of layer @DiffIDs@.
--
-- <https://github.com/docker/docker/blob/6e8a2cd29113896acfc3f97a43dd27f751f2f519/layer/layer.go#L239 layer.CreateChainID>
createChainID :: Seq DiffID -- ^ A sequence of layer @DiffID@s, (usually) fetched from the image's config JSON.
-> Maybe ChainID
createChainID = createChainIDFromParent Nothing
-- | Produce a @ChainID@ given the @ChainID@ of a parent layer and a
-- sequence of layer @DiffIDs@.
--
-- <https://github.com/docker/docker/blob/6e8a2cd29113896acfc3f97a43dd27f751f2f519/layer/layer.go#L243 layer.createChainIDFromParent>
createChainIDFromParent :: Maybe Parent -- ^ Previous (parent) @ChainID@ in the sequence used to produce the next @ChainID@.
-> Seq DiffID -- ^ A sequence of layer @DiffID@s, (usually) fetched from the image's config JSON.
-> Maybe ChainID
createChainIDFromParent parent (Seq.viewl -> EmptyL) = parent
createChainIDFromParent parent (Seq.viewl -> h :< rest) =
createChainIDFromParent (maybe root layer parent) rest
where
root = Just $ coerce h
layer = Just . flip chainDigest h
createChainIDFromParent parent _ = parent
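-- As a worked example of the recursion above: for DiffIDs @[d1, d2, d3]@,
-- @createChainID@ produces (modulo the 'Maybe')
-- @chainDigest (chainDigest (coerce d1) d2) d3@; that is, each step hashes
-- the previous 'ChainID' together with the next 'DiffID'.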
-- | Produce a @ChainID@ given a parent @ChainID@ and a layer
-- @DiffID@.
chainDigest :: Parent -- ^ Parent @ChainID@ used to produce a child @ChainID@.
-> DiffID -- ^ Layer @DiffID@.
-> ChainID
chainDigest (show -> c) (show -> d) = ChainID .
Lib.sha256 . CL8.pack $ concat [c, " ", d]
-- | Produce a sequence of @ChainID@s from a sequence of layer
-- @DiffID@s.
--
-- <https://github.com/docker/docker/blob/b826bebda0cff2cc2d3083b954c810d2889eefe5/image/tarexport/save.go#L242 save.saveImage>
chainIDSequence :: Seq DiffID
-> Seq (Maybe ChainID)
chainIDSequence diffIDSeq = mapWithIndex chainIDSlice diffIDSeq
where
chainIDSlice (succ -> i) _ =
createChainID $ seqOf (slicedTo i) diffIDSeq
-- | Produce a sequence of unwrapped Just's from a sequence of
-- Maybe's.
squishMaybe :: MonadPlus m => m (Maybe a) -> m a
squishMaybe = join . fmap adapt
where
adapt Nothing = mzero
adapt (Just x) = return x
-- | Produce layer content ID hashes given an empty JSON config with
-- the layer's @ChainID@ injected as the value of the `layer_id` key
-- and, if not the base layer, the previous @ContentID@ injected as
-- the value of the `parent` key.
--
-- The JSON that is encoded *must* be in the canonical format
-- specified by Docker, please see @Lib.encodeCanonical@ for a
-- convenience function to encode an @Aeson.Value@ satisfying those
-- rules.
contentIDSequence :: Seq ChainID -- ^ A sequence of @ChainID@s, please see @chainIDSequence@.
-> TopLayerJSON -- ^ Config JSON paired with the top-most layer of the image.
-> Seq ContentID
contentIDSequence cids fj = foldl' (contentIDFold fj $ Seq.length cids) Seq.empty cids
-- | A folding function given to @foldl'@. This function computes the
-- @ContentID@'s for each layer using the last computed @ContentID@ as
-- the parent @ContentID@ for each iteration.
--
-- The first two arguments are closed over before being fed to
-- @foldl'@ producing a partial function that satisfies @foldl'@'s
-- first argument type signature.
contentIDFold :: TopLayerJSON -- ^ Config JSON to be hashed with the top-most layer of the image.
-> Int -- ^ Length of the @ChainID@ sequence being folded over.
-> Seq ContentID -- ^ The sequence of @ContentID@s accumulated.
-> ChainID -- ^ The @ChainID@ for producing a @ContentID@.
-> Seq ContentID
contentIDFold _ _ acc@(Seq.viewr -> EmptyR) chainid =
acc |> hashContent Nothing chainid emptyLayerJSON
contentIDFold topLayerJSON ln acc@(Seq.viewr -> _ :> parent) chainid =
acc |> hashedContentID
where
-- Check to see if we're at the end of the sequence we're folding
-- over, if so then hash the content using the top-layer config
-- JSON instead of the empty JSON
hashedContentID =
if ln == (succ $ Seq.length acc)
then hashContent (Just parent) chainid topLayerJSON
else hashContent (Just parent) chainid emptyLayerJSON
contentIDFold _ _ acc chainid =
acc |> hashContent Nothing chainid emptyLayerJSON
-- | Produce a @ContentID@, given a parent and a @ChainID@, builds the
-- empty JSON object with those two values and encodes it following
-- the canonical JSON rules.
hashContent :: Maybe ContentID -- ^ Parent @ContentID@ for injection into the hashing JSON.
-> ChainID -- ^ @ChainID@ to be hashed with the hashing JSON.
-> Data.Aeson.Object -- ^ Aeson AST to be canonically encoded; this can be either the ephemeral JSON or the config JSON.
-> ContentID
hashContent p c jsn = mk $ ephemeralHashableLayerJSON p c jsn
where
mk = ContentID . Lib.sha256 . Lib.encodeCanonical
-- | @emptyLayerJSON@ produces "empty" JSON for use in layer content
-- hashing.
--
-- The Aeson instances for @ContentID@, @DiffID@, and @ChainID@ will
-- correctly output a hex serialization of the SHA256 digest and
-- prefix it with "sha256:", which is necessary to correctly hash the
-- layer config in the same way that Docker's Golang code does it.
--
-- NB: I've manually assembled this in the "canonical order" it needs
-- to be in, in order to correctly hash the JSON string. There is also
-- a custom Aeson pretty printing function that serializes ADTs into
-- the canonical form and should make this function moot once an
-- appropriate ADT is in place.
--
-- TODO: codify this as an ADT to get rid of this manual construction
-- and make things clearer. For now, the manually constructed one is
-- fine (to get things working).
emptyLayerJSON :: Data.Aeson.Object
emptyLayerJSON = H.fromList
[ "container_config" .= object
[ "Hostname" .= ("" :: String)
, "Domainname" .= ("" :: String) -- NB: this one isn't cased like the others :(
, "User" .= ("" :: String)
, "AttachStdin" .= False
, "AttachStdout" .= False
, "AttachStderr" .= False
, "Tty" .= False
, "OpenStdin" .= False
, "StdinOnce" .= False
, "Env" .= (Nothing :: Maybe String)
, "Cmd" .= (Nothing :: Maybe String)
, "Image" .= ("" :: String)
-- This is an object with significant keys and empty values
-- (don't ask me why)
, "Volumes" .= (Nothing :: Maybe Data.Aeson.Value)
, "WorkingDir" .= ("" :: String)
, "Entrypoint" .= (Nothing :: Maybe String)
, "OnBuild" .= (Nothing :: Maybe String)
, "Labels" .= (Nothing :: Maybe [String])
]
-- This is the "canonical" empty timestamp
, "created" .= emptyTimeStamp
]
-- | Produce an "empty" JSON object given a parent and a
-- @ChainID@. This is used internally to produce the @ContentID@ hash
-- for a given layer.
ephemeralHashableLayerJSON :: Maybe ContentID -- ^ Parent @ContentID@, if Nothing, will not be included in the Aeson AST.
-> ChainID -- ^ @ChainID@ of the layer we're producing the @ContentID@ for.
-> Data.Aeson.Object -- ^ Aeson AST we want to inject the parent @ContentID@ and layer @ChainID@ into.
-> Data.Aeson.Value
ephemeralHashableLayerJSON parent layerid layerJSON =
Object $ layerJSON `H.union` H.fromList
([ "layer_id" .= layerid ] <> (maybeSingletonParent parent))
-- | Produce a layer JSON object given a parent, a @ContentID@, and an
-- Aeson Value Object. This function is different from
-- @ephemeralHashableLayerJSON@ in that its output is (later on)
-- written to the filesystem alongside the `layer.tar` file within the
-- directory named after the @ContentID@ hash.
permanentLayerJSON :: Maybe ContentID
-> ContentID
-> Data.Aeson.Object
-> Data.Aeson.Value
permanentLayerJSON parent layerContentId layerJSON =
Object $ layerJSON `H.union` H.fromList
([ "id" .= (mkPermHash layerContentId) ] <> maybeSingletonParent (mkPermHash <$> parent))
where
mkPermHash = Lib.stripHashId . T.pack . show
-- TODO: this should be parsed into an ADT, transformed algebraically
-- into what it should be, then re-encoded; instead of performing
-- Map-based operations on the AST. This was the quicker option though
-- for now; need to get something working first.
imageConfig2LayerConfig :: Data.Aeson.Object
-> Data.Aeson.Object
imageConfig2LayerConfig = H.filterWithKey keyWhitelist
where
keyWhitelist k _ = k `elem`
[ "container"
, "container_config"
, "docker_version"
, "config"
, "architecture"
, "os"
]
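-- A small sketch of the whitelist filtering (the keys and values below
-- are illustrative, not taken from a real image config):
--
-- @
--   imageConfig2LayerConfig
--     (H.fromList [ ("os", String "linux"), ("history", String "...") ])
--       == H.fromList [ ("os", String "linux") ]
-- @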
-- | Produce mempty if the parent is Nothing; if the parent is @Just
-- ContentID@ then it returns a singleton list with the expected
-- @Data.Aeson.Pair@ construction for the empty layer JSON.
--
-- The input argument is parameterized because the permanent JSON
-- config objects store hashes with the "sha256:" prefix stripped, but
-- the ephemeral JSON objects used to produce the Content ID hashes
-- want the "sha256:" prefix to be present!
maybeSingletonParent :: ToJSON a
=> Maybe a
-> [(T.Text, Data.Aeson.Value)]
maybeSingletonParent = maybe mempty (singletonList . ("parent" .=))
where
-- Alternatively - singleton v = [v]
singletonList = (: [])
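-- A sketch of both cases (@someParentId@ is a hypothetical @ContentID@):
--
-- @
--   maybeSingletonParent (Nothing :: Maybe ContentID) == []
--   maybeSingletonParent (Just someParentId)          == [ "parent" .= someParentId ]
-- @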
-- | Produce the string "0001-01-01T00:00:00Z".
emptyTimeStamp :: String
emptyTimeStamp = "0001-01-01T00:00:00Z"

View File

@ -0,0 +1,108 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Image.V1.Types
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Data.Docker.Image.V1.Types where
import qualified Crypto.Hash as Hash
import Data.Aeson
import Data.Aeson.Types
import qualified Data.ByteArray as BA
import qualified Data.ByteArray.Encoding as BA
import qualified Data.ByteString.Char8 as C8
import qualified Data.Text as T
import Data.Text.Encoding (encodeUtf8)
-- | Attempt to parse a @C8.ByteString@ into a @Hash.Digest
-- Hash.SHA256@.
--
-- A @Digest@, in Docker Golang-code parlance, is the hexadecimal
-- string representation of a hash function's digest with the hash
-- function's identifier prefixed onto the string. Right now Docker
-- only uses SHA256 everywhere and doesn't do anything to
-- parameterize it.
--
-- There is a custom Show instance for this newtype to output a string
-- representation of the digest prefixed by its hashing function
-- identifier.
toDigest :: C8.ByteString -> Maybe (Hash.Digest Hash.SHA256)
toDigest = from . C8.break (== ':')
where
from ("sha256", r) = either (const Nothing) Hash.digestFromByteString . toBytes $ C8.tail r
from (_, _) = Nothing
toBytes :: C8.ByteString -> Either String BA.Bytes
toBytes = BA.convertFromBase BA.Base16
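-- An illustrative, doctest-style sketch (the digest below is the
-- well-known SHA256 of the empty string, used purely as an example):
--
-- >>> toDigest "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
-- Just e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
--
-- >>> toDigest "md5:d41d8cd98f00b204e9800998ecf8427e"
-- Nothing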
-- | A special kind of SHA256 hash digest identifying a layer by its
-- *content*. This value is a hash of an empty, canonicalized JSON
-- string with a "layer_id" (which is actually the layer's @ChainID@)
-- and possibly a parent ID (which is the previous-layer-in-sequence
-- @ContentID@).
newtype ContentID = ContentID (Hash.Digest Hash.SHA256)
deriving (Eq)
-- | A special kind of SHA256 digest identifying a specific sequence
-- of layers.
--
-- <https://github.com/docker/docker/blob/6e8a2cd29113896acfc3f97a43dd27f751f2f519/layer/layer.go#L60 layer.ChainID>
newtype ChainID = ChainID (Hash.Digest Hash.SHA256)
deriving (Eq)
-- | A special kind of a SHA256 digest identifying a layer by the
-- sha256 sum of the uncompressed layer tarball. "Diff" in this
-- context refers to the root filesystem contents of the tarball
-- identified by @DiffID@ representing the difference from the
-- previous layer.
--
-- <https://github.com/docker/docker/blob/6e8a2cd29113896acfc3f97a43dd27f751f2f519/layer/layer.go#L68 layer.DiffID>
newtype DiffID = DiffID (Hash.Digest Hash.SHA256)
deriving (Eq)
-- | Show a hexadecimal encoded SHA256 hash digest and prefix
-- "sha256:" to it.
showSHA :: Hash.Digest Hash.SHA256 -> String
showSHA = ("sha256:" ++) . show
instance Show ContentID where
show (ContentID d) = showSHA d
instance Show ChainID where
show (ChainID d) = showSHA d
instance Show DiffID where
show (DiffID d) = showSHA d
instance ToJSON ContentID where
toJSON v = String . T.pack $ show v
instance ToJSON ChainID where
toJSON v = String . T.pack $ show v
instance ToJSON DiffID where
toJSON v = String . T.pack $ show v
instance FromJSON ContentID where
parseJSON o@(String v) =
case toDigest $ encodeUtf8 v of
Just v' -> return $ ContentID v'
Nothing -> typeMismatch "SHA256 Digest" o
parseJSON inv = typeMismatch "SHA256 Digest" inv
instance FromJSON ChainID where
parseJSON o@(String v) =
case toDigest $ encodeUtf8 v of
Just v' -> return $ ChainID v'
Nothing -> typeMismatch "SHA256 Digest" o
parseJSON inv = typeMismatch "SHA256 Digest" inv
instance FromJSON DiffID where
parseJSON o@(String v) =
case toDigest $ encodeUtf8 v of
Just v' -> return $ DiffID v'
Nothing -> typeMismatch "SHA256 Digest" o
parseJSON inv = typeMismatch "SHA256 Digest" inv

View File

@ -0,0 +1,108 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PartialTypeSignatures #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TemplateHaskell #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Image.V1_2.Types
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
--
-- The types in this module are used to describe two specific pieces
-- of JSON within the v1.2 Docker Image spec: @manifest.json@ and
-- @repositories@.
----------------------------------------------------------------------------
module Data.Docker.Image.V1_2.Types where
import Data.Aeson
import Data.Aeson.TH
import Data.Aeson.Types
import Data.HashMap.Strict as H
import qualified Data.Text as T
import Data.Docker.Image.AesonHelpers
import Lib
----------------------------------------------------------------------------
--
-- Pretty-printed example of the `manifest.json` file.
{-
[
{
"Config": "3e83c23dba6a16cd936a3dc044df71b26706c5a4c28181bc3ca4a4af9f5f38ee.json",
"Layers": [
"10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9.tar"
],
"RepoTags": [
"library/debian:jessie"
]
}
]
-}
-- Pretty-printed example of the `repositories` json file.
{-
{
"library/debian": {
"jessie": "10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9"
}
}
-}
-- | A 'Text' representing a layer hash digest sourced from a docker
-- image's config JSON (different from the image's manifest JSON).
type RefLayer = T.Text
-- | A 'String' representing the full repository tag, e.g: @library/debian@.
type RepoTag = String
-- | Represents a v1.2 Docker Image manifest.
data ImageManifest = ImageManifest
{ -- | 'FilePath' within the image archive of the image's config
-- JSON
config :: FilePath
-- | List of image repository tags
, repoTags :: [T.Text]
-- | List of layers within the image archive named by their hash
-- digest and with the tar extension appended
, layers :: [FilePath]
} deriving (Show, Eq)
-- | Represents an object of 'ImageRepo's. The repository names are the
-- top-level keys and each value is an object whose keys are the tags
-- of the repository, mapped to the hash of the layer that each tag
-- references.
data ImageRepositories = ImageRepositories [ImageRepo]
deriving (Show, Eq)
data ImageRepo = ImageRepo
{ -- | Repository tag
repo :: T.Text
-- | 'HashMap' of tags to the top-most layer associated with that tag
, tags :: H.HashMap T.Text T.Text
} deriving (Show, Eq)
$(deriveJSON stdOpts{ fieldLabelModifier = upperFirst } ''ImageManifest)
instance ToJSON ImageRepositories where
toJSON (ImageRepositories r) =
Object . H.unions $ [i | o@(Object i) <- (fmap toJSON r), isObject o]
where
isObject (Object _) = True
isObject _ = False
instance ToJSON ImageRepo where
toJSON (ImageRepo r t) = object [ r .= toJSON t ]
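-- For example, matching the pretty-printed @repositories@ JSON sketched
-- above (digest abbreviated for readability):
--
-- @
--   toJSON (ImageRepo "library/debian" (H.singleton "jessie" "10a267c67f42..."))
-- @
--
-- encodes to an object of the shape
-- @{"library/debian": {"jessie": "10a267c67f42..."}}@.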
instance FromJSON ImageRepositories where
parseJSON (Object v) = ImageRepositories <$> (mapM buildRepo $ H.toList v)
where
buildRepo (k,v') = ImageRepo k <$> parseJSON v'
parseJSON v = typeMismatch "ImageRepositories" v

22
src/Data/Docker/Nix.hs Normal file
View File

@ -0,0 +1,22 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Nix
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
--
-- This module only re-exports Nix modules providing Docker-specific
-- functionality as it pertains to Nix.
----------------------------------------------------------------------------
module Data.Docker.Nix
( -- * Generating `fetchdocker` Nix Derivation Expressions
module Data.Docker.Nix.FetchDocker
) where
import Data.Docker.Nix.FetchDocker

View File

@ -0,0 +1,224 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Nix.FetchDocker
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Data.Docker.Nix.FetchDocker where
import Control.Lens
import Control.Monad
import Control.Monad.Except as Except
import Data.Aeson.Lens
import qualified Data.Bifunctor as Bifunctor
import Data.Coerce
import Data.Fix
import Data.Maybe
import Data.Monoid
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8')
import Data.Text.Encoding.Error
import Nix.Expr
import URI.ByteString
import Data.Docker.Image.Types
import Data.Docker.Nix.Lib as Nix.Lib
import Lib
import Network.Wreq.Docker.Registry.V2 (pluckLayersFrom)
import Types
import Types.Exceptions
import Types.ImageTag
{- Example output of the pretty-printed, generated Nix expression AST.
{
config.docker.images.debian = pkgs.fetchdocker {
name = "debian";
registry = "https://registry-1.docker.io/v2/";
repository = "library";
imageName = "debian";
tag = "latest";
imageConfig = pkgs.fetchDockerConfig {
inherit registry repository imageName tag;
sha256 = "1viqbygsz9547jy830f2lk2hcrxjf7gl9h1xda9ws5kap8yw50ry";
};
imageLayers = let
layer0 = pkgs.fetchDockerLayer {
inherit registry repository imageName tag;
layerDigest = "10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9";
sha256 = "1fcmx3aklbr24qsjhm6cvmhqhmrxr6xlpq75mzrk0dj2gz36g8hh";
};
in [ layer0 ];
};
}
-}
-- | @fetchdocker@ derivation name.
constFetchdocker :: T.Text
constFetchdocker = "fetchdocker"
-- | @fetchDockerConfig@ derivation name.
constFetchDockerConfig :: T.Text
constFetchDockerConfig = "fetchDockerConfig"
-- | @fetchDockerLayer@ derivation name.
constFetchDockerLayer :: T.Text
constFetchDockerLayer = "fetchDockerLayer"
-- | Generate a Nix expression AST from a @HockerImageMeta@
-- record. This function crucially checks that the supplied manifest
-- JSON contains a key in the top-level object describing what version
-- of the manifest we have.
generate :: HockerImageMeta -> IO (Either HockerException NExpr)
generate dim@HockerImageMeta{..} = runExceptT $
case (manifestJSON ^? key "schemaVersion" . _Integer) of
Just 2 -> do
nixhash <- Lib.findExec "nix-hash"
configDigest <- Nix.Lib.toBase32Nix nixhash . Base16Digest $ pluckedConfigDigest
layerDigests <- forM pluckedLayerDigests $ \d16 ->
(Base16Digest d16,) <$> (Nix.Lib.toBase32Nix nixhash $ Base16Digest d16)
ExceptT (pure $ generateFetchDockerExpr dim configDigest layerDigests)
Just v ->
throwError $ HockerException ("Expected: 2 but got: " <> (show v)) Nothing Nothing
Nothing ->
throwError $ HockerException "No key 'schemaVersion' in JSON object" Nothing Nothing
where
-- 'stripHashId' is necessary because digests in the manifest are
-- prefixed by the hash algorithm used to generate them
pluckedConfigDigest = Lib.stripHashId $ manifestJSON ^. key "config" . key "digest" . _String
pluckedLayerDigests = Lib.stripHashId <$> pluckLayersFrom manifestJSON
{-| Generate a top-level Nix Expression AST from a 'HockerImageMeta'
record, a config digest, and a list of layer digests.
The generated AST, pretty-printed, may look similar to the following:
@
{
config.docker.images.debian = pkgs.fetchdocker {
name = "debian";
registry = "https://registry-1.docker.io/v2/";
repository = "library";
imageName = "debian";
tag = "latest";
imageConfig = pkgs.fetchDockerConfig {
inherit registry repository imageName tag;
sha256 = "1viqbygsz9547jy830f2lk2hcrxjf7gl9h1xda9ws5kap8yw50ry";
};
imageLayers = let
layer0 = pkgs.fetchDockerLayer {
inherit registry repository imageName tag;
layerDigest = "10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9";
sha256 = "1fcmx3aklbr24qsjhm6cvmhqhmrxr6xlpq75mzrk0dj2gz36g8hh";
};
in [ layer0 ];
};
}
@
-}
generateFetchDockerExpr :: HockerImageMeta -> ConfigDigest -> [(Base16Digest, Base32Digest)] -> Either HockerException NExpr
generateFetchDockerExpr dim@HockerImageMeta{..} configDigest layerDigests = do
let fetchconfig = mkFetchDockerConfig commonInherits configDigest
fetchlayers =
mkLets
(mkFetchDockerLayers commonInherits layerDigests)
(mkList $ fmap genLayerId [0..(Prelude.length layerDigests)-1])
fetchDockerExpr <- mkFetchDocker dim fetchconfig fetchlayers
pure (Fix $ NSet [ dockerImgExpr fetchDockerExpr ])
where
dockerImgExpr fDockerExpr = NamedVar imgSelector fDockerExpr
genLayerId i = mkSym . T.pack $ "layer" <> show i
imgSelector =
[ StaticKey "config"
, StaticKey "docker"
, StaticKey "images"
, StaticKey imageName
]
commonInherits = inherit
[ StaticKey "registry"
, StaticKey "repository"
, StaticKey "imageName"
, StaticKey "tag"
]
-- | Generate a @pkgs.fetchdocker { ... }@ function call and argument
-- attribute set. Please see the 'generateFetchDockerExpr' documentation
-- for an example of the full output.
mkFetchDocker :: HockerImageMeta -> NExpr -> NExpr -> Either HockerException NExpr
mkFetchDocker HockerImageMeta{..} fetchconfig fetchlayers = do
registry <- Bifunctor.first mkHockerException serializedRegistry
pure
(mkApp (mkPkgsAttrSelector constFetchdocker)
(attrsE
[ ("name", mkStr $ fromMaybe imageName altImageName)
, ("registry", mkStr registry)
, ("repository", mkStr imageRepo)
, ("imageName", mkStr imageName)
, ("tag", mkStr (T.pack $ coerce imageTag))
, ("imageConfig", fetchconfig)
, ("imageLayers", fetchlayers)
]))
where
serializedRegistry = decodeUtf8' (serializeURIRef' dockerRegistry)
mkHockerException (DecodeError err char) =
HockerException (err <> " " <> (show char)) Nothing Nothing
mkHockerException err =
HockerException (show err) Nothing Nothing
-- | Generate a @pkgs.fetchDockerConfig { ... }@ function call and
-- argument attrset. This function takes an argument for a list of
-- static keys to inherit from the parent attribute set; it helps
-- reduce the noise in the output expression.
mkFetchDockerConfig :: Binding NExpr -> Base32Digest -> NExpr
mkFetchDockerConfig inherits (Base32Digest digest) =
mkApp (mkPkgsAttrSelector constFetchDockerConfig)
(Fix $ NSet [ inherits, "sha256" $= (mkStr digest) ])
-- | Generate a list of Nix expression ASTs representing
-- @pkgs.fetchDockerLayer { ... }@ function calls. This function takes
-- an argument for a list of static keys to inherit from the parent
-- attribute set; it helps reduce the noise in the output expression.
--
-- NB: the hash digest tuple in the second argument is the base16
-- encoded hash digest plucked from the image's manifest JSON and a
-- @nix-hash@ base32 encoded copy.
--
-- This is necessary because fixed output derivations require a
-- pre-computed hash (which we have, thanks to the manifest) and the
-- hash must be base32 encoded using @nix-hash@'s own base32
-- encoding. The base16 encoded hash digest is needed intact in order
-- for the @pkgs.fetchDockerLayer@ builder script (which calls the
-- @hocker-layer@ utility) to download the layer from a docker
-- registry.
mkFetchDockerLayers :: Binding NExpr -> [(Base16Digest, Base32Digest)] -> [Binding NExpr]
mkFetchDockerLayers inherits layerDigests =
fmap mkFetchLayer $ Prelude.zip [0..(Prelude.length layerDigests)] layerDigests
where
mkLayerId i = T.pack $ "layer" <> show i
mkFetchLayer (i, ((Base16Digest d16), (Base32Digest d32))) =
(mkLayerId i) $= mkApp (mkPkgsAttrSelector constFetchDockerLayer)
(Fix $ NSet
[ inherits
, "layerDigest" $= (mkStr d16) -- Required in order to perform a registry request
, "sha256" $= (mkStr d32) -- Required by Nix for fixed output derivations
])
-- | Generate a selector for an attribute within the @pkgs@ set, e.g.
-- @pkgs.fetchDockerLayer@.
mkPkgsAttrSelector :: T.Text -> NExpr
mkPkgsAttrSelector k = Fix $ NSelect (mkSym "pkgs") [StaticKey k] Nothing

View File

@ -0,0 +1,57 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE FlexibleContexts #-}
-----------------------------------------------------------------------------
-- |
-- Module : Data.Docker.Nix.Lib
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Data.Docker.Nix.Lib where
import Control.Foldl as Foldl
import Turtle
import Control.Monad.Except as Except
import qualified Data.Text as T
import Types
import Types.Exceptions
-- | Convert a @Base16Digest@ to a @Base32Digest@ using the supplied
-- `nix-hash` utility.
--
-- NB: Nix implements its own custom base32 encoding function for
-- hashes that is not compatible with other more standard and native
-- implementations in Haskell. I opted to call out to `nix-hash`
-- instead of re-implementing their algorithm here in Haskell because
-- it's non-standard and may change, creating a maintenance headache
-- and "surprise" behavior for users.
toBase32Nix :: (MonadIO m, Except.MonadError HockerException m)
=> Prelude.FilePath -- ^ Path to the `nix-hash` executable, see @Lib.findExec@.
-> Base16Digest -- ^ @Base16@ hash digest to @Base32@ encode.
-> m Base32Digest
toBase32Nix nixhash (Base16Digest d16) =
Turtle.fold convertDigest Foldl.head >>= \case
Nothing -> throwError $ HockerException "nothing was returned by `nix-hash', not even an error" Nothing Nothing
Just result ->
either
(throwError . hockerExc . T.unpack . lineToText)
(return . Base32Digest . lineToText)
result
where
hockerExc m = HockerException m Nothing Nothing
convertDigest =
inprocWithErr
(T.pack nixhash)
[ "--type"
, "sha256"
, "--to-base32"
, d16
]
Turtle.empty
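-- For reference, the conversion performed here is equivalent to running
-- the following at a shell (the digest argument being the base16 string
-- wrapped by the @Base16Digest@):
--
-- > nix-hash --type sha256 --to-base32 <base16 digest>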

190
src/Lib.hs Normal file
View File

@ -0,0 +1,190 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
-----------------------------------------------------------------------------
-- |
-- Module : Lib
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Lib where
import Control.Exception (throwIO)
import Control.Lens
import qualified Control.Monad.Except as Except
import Control.Monad.IO.Class (MonadIO (..))
import qualified Crypto.Hash as Hash
import qualified Data.Aeson
import qualified Data.Aeson.Encode.Pretty as AP
import Data.Aeson.Lens
import qualified Data.ByteString.Char8 as C8
import Data.ByteString.Lazy.Char8 as C8L
import Data.Char
import Data.Coerce
import Data.Monoid
import qualified Data.Text as T
import Data.Text.Encoding (encodeUtf8)
import Network.Wreq
import Nix.Expr (NExpr)
import Nix.Pretty
import System.Directory (findExecutable)
import System.Environment (getProgName)
import System.Exit as Exit
import System.FilePath.Posix as File
import System.IO (stdout)
import Text.PrettyPrint.ANSI.Leijen as Text.PrettyPrint (SimpleDoc,
displayIO,
renderPretty)
import URI.ByteString
import Data.Docker.Image.V1.Types
import Types
import Types.Exceptions
import Types.ImageName
import Types.ImageTag
-- | Throw a @userError@, exiting the program with the supplied
-- message.
die :: MonadIO io => T.Text -> io a
die = liftIO . throwIO . userError . T.unpack
-- | Print an error message to stderr and return a non-zero exit code;
-- the message is prefixed with the name of the program.
exitProgFail :: String -> IO a
exitProgFail msg = do
name <- getProgName
Exit.die $ name ++ ": " ++ msg
-- | Write a bytestring to the provided filesystem path if the path is
-- @Just@ a value, printing the path that was written; otherwise print
-- the entire contents to the screen.
writeOrPrint :: Maybe FilePath -> C8L.ByteString -> IO ()
writeOrPrint o r = case o of
Just p' -> C8L.writeFile p' r >> Prelude.putStrLn p'
Nothing -> C8L.putStrLn r
-- | Make a path given a base path and a docker container name.
mkOutImage :: ImageName -> FilePath -> FilePath
mkOutImage n o = o </> (takeBaseName $ coerce n)
-- | Make a path given a base path, a docker container name, and a
-- docker container tag, appending "-config.json" to the basename.
mkOutConfig :: ImageName -> ImageTag -> FilePath -> FilePath
mkOutConfig n t o = o </> Prelude.concat
[ (takeBaseName $ coerce n)
, "_", coerce t
, "-config.json"
]
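-- A doctest-style sketch of the expected shape (paths are illustrative):
--
-- >>> mkOutConfig (ImageName "library/debian") (ImageTag "jessie") "/tmp"
-- "/tmp/debian_jessie-config.json"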
-- | Make a path given a base path, a docker container name, and a
-- docker container tag, appending "-manifest.json" to the basename.
mkOutManifest :: ImageName -> ImageTag -> FilePath -> FilePath
mkOutManifest n t o = o </> Prelude.concat
[ (takeBaseName $ coerce n)
, "_", coerce t
, "-manifest.json"
]
-- | Safely join a list of strings and a Network.URI record together
-- using @joinPath@.
joinURIPath :: [String] -> RegistryURI -> RegistryURI
joinURIPath pts uri@URI{..} = uri { uriPath = joinedParts }
where
joinedParts = C8.pack $ File.joinPath ("/":"v2":(C8.unpack uriPath):pts)
-- | Produce an @Options@ using @Network.Wreq.defaults@ and an @Auth@.
opts :: Maybe Auth -> Options
opts bAuth = Network.Wreq.defaults & Network.Wreq.auth .~ bAuth
-- | Hash a @Data.ByteString.Lazy.Char8@ using the SHA256 algorithm.
sha256 :: C8L.ByteString -> Hash.Digest Hash.SHA256
sha256 = Hash.hashlazy
-- | Strip the hash algorithm identifier prefix from the beginning of
-- a hash digest string; e.g: "sha256:<digest>" becomes "<digest>".
stripHashId :: T.Text -> T.Text
stripHashId = snd . T.breakOnEnd ":"
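-- Doctest-style examples (digests are illustrative):
--
-- >>> stripHashId "sha256:abc123"
-- "abc123"
--
-- A digest without an algorithm prefix passes through unchanged:
--
-- >>> stripHashId "abc123"
-- "abc123"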
-- | Encode, following Docker's canonical JSON rules, any @ToJSON@
-- data type.
--
-- The canonicalization rules enable consistent hashing of encoded
-- JSON, a process relied upon heavily by docker for content
-- addressability and unique identification of resources within a
-- docker registry. Notably, an image's config JSON file and layers.
--
-- NB: <http://54.71.194.30:4016/registry/spec/json Docker's canonical JSON spec>
-- intentionally *does not* follow the <http://wiki.laptop.org/go/Canonical_JSON OLPC>'s
-- Canonical JSON format even though it was inspired by it.
encodeCanonical :: Data.Aeson.ToJSON a => a -> C8L.ByteString
encodeCanonical = AP.encodePretty' conf
where
-- NB: the spec requires keys to be in lexically sorted order and
-- it appears that the Ord instance of @Text@ behaves the same way
-- the Ord instance for @String@ does: it sorts lexically.
conf = AP.defConfig { AP.confIndent = AP.Spaces 0, AP.confCompare = compare }
-- | Throw an error if `Maybe FilePath` is `Nothing`, otherwise return
-- the @FilePath@ unwrapped.
requireOutPath :: (Except.MonadError HockerException m)
=> Maybe FilePath
-> m (FilePath)
requireOutPath = maybe outPathError return
where
outPathError = Except.throwError $
hockerException "To fetch and assemble a docker image, `--out=<path>` must be supplied"
-- | Pluck the digest value for the config JSON given a docker
-- registry image manifest, attempting to parse and return the digest
-- value as a `Digest SHA256` and throwing an error otherwise.
getConfigDigest :: (Except.MonadError HockerException m)
=> C8L.ByteString
-> m (Hash.Digest Hash.SHA256)
getConfigDigest (view (key "config" . key "digest" . _String) -> digest) =
maybe badDigest return parsedDigest
where
parsedDigest = toDigest $ encodeUtf8 digest
badDigest = Except.throwError $ hockerException "Failed parsing the config hash digest"
-- | @upperFirst@ uppercases the first letter of the string.
upperFirst :: String -> String
upperFirst [] = []
upperFirst (h:t) = toUpper h : t
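-- This is used as the @fieldLabelModifier@ when deriving the image
-- manifest's JSON instances, e.g:
--
-- >>> upperFirst "config"
-- "Config"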
-- | Split a docker image's name on the forward slash separator so we
-- get the distinct repo name and image name.
splitImgName :: ImageName -> (RepoNamePart, ImageNamePart)
splitImgName (ImageName (T.pack -> n)) = over _2 T.tail $ T.break (=='/') n
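-- For example:
--
-- >>> splitImgName (ImageName "library/debian")
-- ("library","debian")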
-- | Pretty print a Nix expression and return a
-- @Text.PrettyPrint.SimpleDoc@; this can in turn be displayed to the
-- screen using @Text.PrettyPrint.displayIO@ or transformed into a
-- string using @Text.PrettyPrint.displayS@.
renderNixExpr :: NExpr -> Text.PrettyPrint.SimpleDoc
renderNixExpr = renderPretty 0.4 120 . prettyNix
-- | Pretty print a Nix expression AST and print to stdout.
pprintNixExpr :: NExpr -> IO ()
pprintNixExpr = displayIO stdout . renderNixExpr
-- | Given an executable's name, try to find it in the current
-- process's PATH context.
findExec :: (MonadIO m, Except.MonadError HockerException m)
=> String
-> m Prelude.FilePath
findExec execname = (liftIO $ findExecutable execname) >>= \case
Just v -> return v
Nothing -> Except.throwError $
HockerException
("cannot find executable `" <> execname <> "'")
Nothing
Nothing

View File

@ -0,0 +1,150 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ViewPatterns #-}
-----------------------------------------------------------------------------
-- |
-- Module : Network.Wreq.Docker.Image.Lib
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Network.Wreq.Docker.Image.Lib where
import qualified Codec.Archive.Tar as Tar
import qualified Codec.Compression.GZip as GZip
import qualified Control.Concurrent.PooledIO.Final as Pool
import Control.Lens
import Control.Monad.Except
import Control.Monad.Reader
import qualified Data.ByteString.Lazy.Char8 as C8L
import Data.Coerce
import qualified Data.HashMap.Strict as HashMap
import Data.Monoid
import qualified Data.Text as Text
import qualified Network.Wreq as Wreq
import qualified System.Directory as Directory
import System.FilePath.Posix as File
import System.Terminal.Concurrent
import Data.Docker.Image.V1_2.Types
import Lib
import Network.Wreq.Docker.Registry.V2 as Docker.Registry
import Types
import Types.Exceptions
import Types.ImageTag
-- | Like @mapM@ but concurrently applies a function to the elements
-- of the @Traversable@, limiting the maximum number of worker threads
-- to *n*.
mapPool :: Traversable t
=> Int -- ^ Number of pooled worker threads
-> ((String -> IO ()) -> a -> Hocker FilePath) -- ^ Processing function
-> t a -- ^ A Traversable container
-> Hocker (t (Either HockerException FilePath))
mapPool n f l = do
env <- ask
writeC <- liftIO getConcurrentOutputter
let f' v = (runHocker (f writeC v) env)
-- TODO: because I'm re-wrapping the function traversing the
-- traversable, I need to extract the Left's from the result and
-- propagate an error up with @throwError@ from this function.
--
-- TODO: refactor this such that the previous TODO is unnecessary.
liftIO . Pool.runLimited n $ traverse (Pool.fork . f') l
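-- A usage sketch, mirroring how the image assembly code in
-- "Network.Wreq.Docker.Image.V1_2" downloads layers with at most three
-- concurrent workers (variable names are illustrative):
--
-- @
--   layers <- mapPool 3 fetchLayer (zip refLayers manifestLayers)
-- @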
-- | Like @mapPool@ but with the arguments flipped.
forPool :: Traversable t
=> Int -- ^ Number of pooled worker threads
-> t a -- ^ A Traversable container
-> ((String -> IO ()) -> a -> Hocker FilePath) -- ^ Processing function
-> Hocker (t (Either HockerException FilePath))
forPool n = flip $ mapPool n
-- | Download, verify, decompress, and write a docker container image
-- layer to the filesystem.
fetchLayer :: (String -> IO ()) -- ^ Concurrent terminal output function
-> (RefLayer, Layer) -- ^ A tuple of the reference layer hash digest from the image's config JSON and the hash digest from the image's manifest JSON
-> Hocker FilePath
fetchLayer writeC layer@(refl, (stripHashId -> layer')) = ask >>= \HockerMeta{..} -> do
liftIO . writeC . Text.unpack $ "Downloading layer: " <> (Text.take 7 layer')
fetchedImageLayer <- checkResponseIntegrity' =<< (Docker.Registry.fetchLayer $ snd layer)
let decompressed = fetchedImageLayer & Wreq.responseBody %~ GZip.decompress
shortRef = Text.take 7 refl
imageOutDir <- Lib.requireOutPath outDir
liftIO $ writeC " => decompressed "
let layerOutPath = File.joinPath [imageOutDir, Text.unpack refl] `addExtension` "tar"
layerPath <- writeRespBody layerOutPath refl decompressed
liftIO . writeC $ Text.unpack ("=> wrote " <> shortRef)
return layerPath
-- | Generate a @manifest.json@ file.
createImageManifest :: RepoTag -- ^ e.g: registry.mydomain.net:5001/reponame/imagename
-> FilePath -- ^ Path of image config file for manifest
-> [RefLayer] -- ^ Layer hash digests sourced from the image's config JSON
-> Hocker ()
createImageManifest repoTag imageConfigFile refls = ask >>= \HockerMeta{..} -> do
let imageManifest = [
ImageManifest
(takeBaseName imageConfigFile `addExtension` "json")
[Text.pack (repoTag ++ ":" ++ coerce imageTag)]
(fmap ((`addExtension` "tar") . Text.unpack) refls) ]
imageOutDir <- Lib.requireOutPath outDir
liftIO $ C8L.writeFile
(imageOutDir </> "manifest" `addExtension` "json")
(Lib.encodeCanonical imageManifest)
-- | Generate a @repositories@ json file.
--
-- NB: it is JSON, but unlike its sibling @manifest.json@, Docker
-- doesn't want it to have a JSON extension.
createImageRepository :: RepoTag -- ^ e.g: registry.mydomain.net:5001/reponame/imagename
-> [RefLayer] -- ^ Layer hash digests sourced from the image's configuration JSON
-> Hocker ()
createImageRepository repoTag refls = ask >>= \HockerMeta{..} -> do
let repositories =
ImageRepo
(Text.pack repoTag)
-- Create a singleton map from a tag and the "latest" layer;
-- Aeson will correctly encode this as an object with a key
-- (the tag) and value (the layer within the archive named
-- by its hash digest)
(HashMap.singleton
(Text.pack $ coerce imageTag)
((Prelude.last refls) <> ".tar"))
imageOutDir <- Lib.requireOutPath outDir
liftIO $ C8L.writeFile
(imageOutDir </> "repositories")
(Lib.encodeCanonical repositories)
-- | Tar and gzip the output dir into the final docker image archive
-- and remove the output dir.
createImageTar :: Hocker FilePath
createImageTar = ask >>= \HockerMeta{..} -> do
imageOutDir <- Lib.requireOutPath outDir
archivePath <- Lib.requireOutPath out
entries <- liftIO $ Directory.getDirectoryContents imageOutDir
-- TODO: remove once we have a newer `directory`
let entriesToPack = [e | e <- entries, e /= ".", e /= ".."]
liftIO $ Tar.create archivePath imageOutDir entriesToPack
-- Cleanup after ourselves
liftIO $ Directory.removeDirectoryRecursive imageOutDir
return $ archivePath

View File

@ -0,0 +1,158 @@
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
-----------------------------------------------------------------------------
-- |
-- Module : Network.Wreq.Docker.Image.V1_2
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Network.Wreq.Docker.Image.V1_2 where
import Control.Lens
import Control.Monad
import Control.Monad.Except
import Control.Monad.Reader
import Data.ByteString.Lazy.Char8 as C8L
import Data.Coerce
import Data.Either
import Data.HashSet as Set
import Data.Monoid
import qualified Data.Text as T
import NeatInterpolation
import qualified Network.Wreq as Wreq
import System.FilePath.Posix as File
import System.Terminal.Concurrent
import Data.Docker.Image.V1.Types
import Lib
import Network.Wreq.Docker.Image.Lib as Docker.Image
import Network.Wreq.Docker.Registry.V2 as Docker.Registry
import Types
import Types.Exceptions
import Types.ImageName
-- | Fetches an image from the specified (or default) V2 Docker
-- Registry and assembles the artifacts into a compatible Docker V1.2
-- Image.
fetchAndAssemble :: HockerMeta -> IO (Either HockerException T.Text)
fetchAndAssemble = runHocker doFetchImage
-- | Fetches a layer by its digest key from the specified (or default)
-- V2 Docker Registry.
fetchLayer :: HockerMeta -> IO (Either HockerException FilePath)
fetchLayer = runHocker doFetchLayer
-- | Fetches the config file of the specified image from the specified
-- (or default) V2 Docker Registry and returns it.
fetchConfig :: HockerMeta -> IO (Either HockerException C8L.ByteString)
fetchConfig = runHocker doFetchConfig
-- | Fetches the manifest file of the specified image from the
-- specified (or default) V2 Docker Registry and returns it.
fetchImageManifest :: HockerMeta -> IO (Either HockerException C8L.ByteString)
fetchImageManifest = runHocker doFetch
where
doFetch = fetchManifest >>= return . view Wreq.responseBody
-- | Executes the monadic logic for fetching the docker image config
-- JSON within the ReaderT monad.
doFetchConfig :: Hocker C8L.ByteString
doFetchConfig = ask >>= \HockerMeta{..} -> do
configDigest <-
fetchManifest
>>= checkResponseIntegrity'
>>= getConfigDigest . view Wreq.responseBody
fetchImageConfig configDigest
>>= return . view Wreq.responseBody
-- | Executes the monadic logic for fetching and saving a layer tar
-- archive.
doFetchLayer :: Hocker FilePath
doFetchLayer = ask >>= \HockerMeta{..} -> do
layerOut <- Lib.requireOutPath out
layerDigest <- T.pack . show <$> maybe
(throwError $ hockerException
"a layer digest is expected!")
return
imageLayer
let shortRef = T.take 7 layerDigest
writeC <- liftIO $ getConcurrentOutputter
liftIO . writeC . T.unpack $ "Downloading layer: " <> shortRef
fetchedImageLayer <- checkResponseIntegrity' =<< Docker.Registry.fetchLayer ("sha256:" <> layerDigest)
layerPath <- writeRespBody layerOut layerDigest fetchedImageLayer
liftIO . writeC $ T.unpack ("=> wrote " <> shortRef)
return layerPath
-- | Executes the monadic logic for fetching, transforming, and
-- assembling a docker container image.
doFetchImage :: Hocker T.Text
doFetchImage = ask >>= \HockerMeta{..} -> do
imageOutDir <- Lib.requireOutPath outDir
manifest <- fetchManifest >>= checkResponseIntegrity'
configDigest <- getConfigDigest $ manifest ^. Wreq.responseBody
-- TODO: ALL of the below steps that handle saving things to the
-- disk should probably be wrapped in a bracket function responsible
-- for cleaning up any partially written data if there's a
-- failure... Or perhaps instead of bracketing in here, we bracket
-- around the @runExceptT@?
-- Fetch and write the configuration json file for the image
let configFileHash = Lib.stripHashId . T.pack $ showSHA configDigest
imageConfig <- fetchImageConfig configDigest
imageConfigFile <- writeRespBody
(File.joinPath [imageOutDir, T.unpack configFileHash] `addExtension` "json")
configFileHash
imageConfig
let refLayers = pluckRefLayersFrom $ imageConfig ^. Wreq.responseBody
refLayers' = fmap Lib.stripHashId refLayers
refLayerSet = Set.fromList refLayers'
manifestLayers = pluckLayersFrom $ manifest ^. Wreq.responseBody
(_, strippedReg) = T.breakOnEnd "//" . T.pack . show $ dockerRegistry
repoTags = (T.unpack strippedReg) </> (coerce imageName)
-- Concurrently fetch layers and write to disk with a limit of three
-- threads
layers <- mapPool 3 Docker.Image.fetchLayer $ Prelude.zip refLayers' manifestLayers
let writtenLayerSet = Set.fromList . fmap (T.pack . takeBaseName) $ rights layers
refLayerSetTxt = T.pack (show refLayerSet)
wrtLayerSetTxt = T.pack (show writtenLayerSet)
dffLayerSetTxt = T.pack (show $ Set.difference refLayerSet writtenLayerSet)
when (writtenLayerSet /= refLayerSet) $
throwError . hockerException $ T.unpack
([text|
Written layers do not match the reference layers!
Reference layers: ${refLayerSetTxt}
Written layers: ${wrtLayerSetTxt}
Difference: ${dffLayerSetTxt}
|])
createImageRepository repoTags refLayers'
createImageManifest repoTags imageConfigFile refLayers'
archivePath <- createImageTar
return $ T.pack archivePath

View File

@ -0,0 +1,231 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE MultiWayIf #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE ViewPatterns #-}
{-# LANGUAGE FlexibleContexts #-}
-----------------------------------------------------------------------------
-- |
-- Module : Network.Wreq.Docker.Registry.V2
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
--
-- Convenience functions for interacting with an instance of Docker
-- Distribution (Docker Registry V2). I've kept the module naming
-- consistent with the docker registry terms since that appears to be
-- what everyone uses colloquially even though the formal name for the
-- software is "docker distribution".
----------------------------------------------------------------------------
module Network.Wreq.Docker.Registry.V2 where
import Control.Lens
import qualified Control.Monad.Except as Except
import Control.Monad.Reader
import Data.Monoid
import qualified Crypto.Hash as Hash
import Data.Aeson.Lens
import Data.ByteString.Lazy.Char8 as C8L
import qualified Data.ByteString.Char8 as C8
import Data.Text.Encoding (decodeUtf8, encodeUtf8)
import URI.ByteString
import NeatInterpolation
import qualified Data.Text as Text
import qualified Network.Wreq as Wreq
import System.Directory
import Data.Docker.Image.V1.Types
import Lib
import Types
import Types.Exceptions
import Types.ImageName
import Types.ImageTag
-- | Default docker hub registry.
defaultRegistry :: URIRef Absolute
defaultRegistry = URI
{ uriScheme = Scheme "https"
, uriAuthority = Just $ Authority
{ authorityUserInfo = Nothing
, authorityHost = Host "registry-1.docker.io"
, authorityPort = Nothing
}
, uriPath = "/v2/"
, uriQuery = Query []
, uriFragment = Nothing
}
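-- Serialized with @serializeURIRef'@, this renders as the familiar hub
-- endpoint (doctest-style sketch):
--
-- >>> serializeURIRef' defaultRegistry
-- "https://registry-1.docker.io/v2/"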
mkAuth :: RegistryURI
-> ImageName
-> Maybe Credentials
-> IO (Maybe Wreq.Auth)
mkAuth reg (ImageName img) credentials =
case credentials of
Just (BearerToken token)
-> pure (Just $ Wreq.oauth2Bearer (encodeUtf8 token))
Just (Basic username password)
-> pure (Just $ Wreq.basicAuth (encodeUtf8 username) (encodeUtf8 password))
Nothing | reg /= defaultRegistry
-> pure Nothing
| otherwise
-> getHubToken >>= pure . mkHubBearer
where
getHubToken = Wreq.get ("https://auth.docker.io/token?service=registry.docker.io&scope=repository:"<>img<>":pull")
mkHubBearer rsp = (Wreq.oauth2Bearer . encodeUtf8) <$> (rsp ^? Wreq.responseBody . key "token" . _String)
-- | Retrieve a list of layer hash digests from an image's manifest
-- JSON.
--
-- TODO: pluck out the layer's size and digest into a tuple.
pluckLayersFrom :: Manifest -> [Layer]
pluckLayersFrom = toListOf (key "layers" . values . key "digest" . _String)
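-- A minimal sketch with a hand-written manifest fragment (the digests
-- are hypothetical):
--
-- >>> pluckLayersFrom "{\"layers\":[{\"digest\":\"sha256:aaa\"},{\"digest\":\"sha256:bbb\"}]}"
-- ["sha256:aaa","sha256:bbb"]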
-- | Retrieve a list of layer hash digests from an image's config
-- JSON.
--
-- This is subtly different from @pluckLayersFrom@: both list hash
-- digests for the image's layers, but the manifest's layer hash
-- digests are keys into the registry's blob storage referencing the
-- *compressed* layer archives, whereas the config JSON's layer hash
-- digests reference the uncompressed layer tar archives within the
-- image.
pluckRefLayersFrom :: ImageConfigJSON -> [Layer]
pluckRefLayersFrom = toListOf (key "rootfs" . key "diff_ids" . values . _String)
-----------------------------------------------------------------------------
-- Top-level docker-registry V2 REST interface functions
-- | Request a V2 registry manifest for the specified docker image.
fetchManifest :: Hocker RspBS
fetchManifest = ask >>= \HockerMeta{..} ->
liftIO $ Wreq.getWith (opts auth & accept) (mkURL imageName imageTag dockerRegistry)
where
mkURL (ImageName n) (ImageTag t) r = C8.unpack (serializeURIRef' $ Lib.joinURIPath [n, "manifests", t] r)
accept = Wreq.header "Accept" .~
[ "application/vnd.docker.distribution.manifest.v2+json"
, "application/vnd.docker.distribution.manifest.list.v2+json"
]
-- | Retrieve the config json of an image by its hash digest (found in
-- the V2 manifest for an image given by a name and tag).
fetchImageConfig :: (Hash.Digest Hash.SHA256) -> Hocker RspBS
fetchImageConfig (showSHA -> digest) = ask >>= \HockerMeta{..} ->
liftIO $ Wreq.getWith (opts auth) (mkURL imageName dockerRegistry)
where
mkURL (ImageName n) r = C8.unpack (serializeURIRef' $ Lib.joinURIPath [n, "blobs", digest] r)
-- | Retrieve a compressed layer blob by its hash digest.
--
-- TODO: take advantage of registry's support for the Range header so
-- we can stream downloads.
fetchLayer :: Layer -> Hocker RspBS
fetchLayer layer = ask >>= \HockerMeta{..} ->
liftIO $ Wreq.getWith (opts auth) (mkURL layer imageName dockerRegistry)
where
mkURL
(Text.unpack -> digest)
(ImageName name)
registry
= C8.unpack (serializeURIRef' $ joinURIPath [name, "blobs", digest] registry)
-- | Write a @Wreq@ response body to the specified @FilePath@,
-- checking the integrity of the file with its sha256 hash digest.
--
-- The second argument, the @StrippedDigest@, must be a hash digest
-- stripped of the "sha256:" hash algorithm identifier prefix.
writeRespBody :: FilePath -- ^ Filesystem path to write the content to
-> StrippedDigest -- ^ Hash digest, stripped of its hash algorithm identifier prefix
-> RspBS -- ^ Wreq lazy bytestring response object
-> Hocker FilePath
writeRespBody out digest resp = do
liftIO . C8L.writeFile out $ resp ^. Wreq.responseBody
-- Now, verify the file; we assume the sha256 function since that is
-- used everywhere
verified <- liftIO $ checkFileIntegrity out digest
either (Except.throwError . hockerException) return verified
-- | Write a response to the filesystem without a request hash
-- digest. Attempt to fetch the value of the `ETag` header to verify
-- the integrity of the content received.
--
-- The Docker docs do *not* recommend this method for verification
-- because the ETag and Docker-Content-Digest headers may change
-- between the time you issue a request with a digest and when you
-- receive a response back!
writeRespBody' :: FilePath -- ^ Filesystem path to write the content to
-> RspBS -- ^ Wreq lazy bytestring response object
-> Hocker FilePath
writeRespBody' out r = writeRespBody out etagHash r
where
etagHash = decodeUtf8 $ r ^. Wreq.responseHeader "ETag"
-- | Compute a sha256 hash digest of the response body and compare it
-- against the supplied hash digest.
checkResponseIntegrity :: (Except.MonadError HockerException m)
=> RspBS -- ^ Wreq lazy bytestring response object
-> StrippedDigest -- ^ Hash digest, stripped of its hash algorithm identifier prefix
-> m RspBS
checkResponseIntegrity r d = do
let contentHash = show . Lib.sha256 $ r ^. Wreq.responseBody
digestHash = Text.unpack d
if | contentHash == digestHash -> pure r
| otherwise ->
let chTxt = Text.pack contentHash
dgTxt = Text.pack digestHash
in Except.throwError
(hockerException
(Text.unpack [text|
Response content hash is $chTxt
and it does not match the addressable content hash
$dgTxt
|]))
-- | Compute a sha256 hash digest of the response body and compare it
-- against the @Docker-Content-Digest@ header from the response.
--
-- The Docker docs do *not* recommend this method for verification
-- because the Docker-Content-Digest header may change between the
-- time you issue a request with a digest and when you receive a
-- response back!
--
-- NB: some registries do not send a @Docker-Content-Digest@ header;
-- I'm not sure yet what causes this, but when that header is absent
-- this function skips the hash check.
checkResponseIntegrity' :: (Except.MonadError HockerException m)
=> RspBS -- ^ Wreq lazy bytestring response object
-> m RspBS
checkResponseIntegrity' rsp =
case decodeUtf8 (rsp ^. Wreq.responseHeader "Docker-Content-Digest") of
-- Since some registries may send back no Docker-Content-Digest
-- header, or an empty one, if it is empty then ignore it
"" -> pure rsp
digest -> checkResponseIntegrity rsp (Lib.stripHashId digest)
-- | Compute a sha256 hash digest for a file and compare that hash to
-- the supplied hash digest.
checkFileIntegrity :: FilePath -- ^ Filesystem path of file to verify
-> StrippedDigest -- ^ Hash digest, stripped of its hash algorithm identifier prefix
-> IO (Either String FilePath)
checkFileIntegrity fp digest =
Except.runExceptT $ do
exists <- liftIO (doesFileExist fp)
when (not exists) $
fail (fp <> " does not exist")
fileHash <- liftIO (return . show . Lib.sha256 =<< C8L.readFile fp)
when (Text.unpack digest /= fileHash) $
let fhTxt = Text.pack fileHash
fpTxt = Text.pack fp
in fail $ Text.unpack
([text|
The sha256 hash for $fpTxt: $fhTxt
Does not match the expected digest: $digest
|])
return fp

View File

@ -0,0 +1,60 @@
{-# LANGUAGE CPP #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE ViewPatterns #-}
-----------------------------------------------------------------------------
-- |
-- Module : Network.Wreq.ErrorHandling
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Network.Wreq.ErrorHandling where
import Control.Exception.Lifted as Lifted
import Control.Lens
import Control.Monad.Except
import Data.ByteString.Char8 as C8
import Data.Monoid
import Network.HTTP.Client
import Network.HTTP.Types.Status
#if !MIN_VERSION_http_client(0,5,0)
import Data.HashMap.Lazy as H
#endif
import Types.Exceptions
interceptHttpExc :: ExceptT HockerException IO a
-> ExceptT HockerException IO a
interceptHttpExc a = Lifted.try a >>= except . over _Left prettify
where
except (Left e) = throwError e
except (Right v) = return v
prettify :: HttpException -> HockerException
#if MIN_VERSION_http_client(0,5,0)
prettify
(HttpExceptionRequest _
(StatusCodeException
(responseStatus -> (Status code msg)) body))
= HockerException
(show code <> " " <> C8.unpack msg)
(Just $ C8.unpack body)
Nothing
#else
prettify
(StatusCodeException (Status code msg) (H.fromList -> e) _)
= HockerException
((show code) <> " " <> C8.unpack msg)
(C8.unpack <$> H.lookup "X-Response-Body-Start" e)
Nothing
#endif
prettify e = HockerException (show e) Nothing Nothing

189
src/Types.hs Normal file
View File

@ -0,0 +1,189 @@
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DuplicateRecordFields #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PartialTypeSignatures #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE ViewPatterns #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types where
import Control.Applicative
import Control.Monad.Error.Class
import qualified Control.Monad.Except as Except
import Control.Monad.IO.Class
import qualified Control.Monad.Reader as Reader
import Control.Monad.Reader.Class
import qualified Crypto.Hash as Hash
import qualified Data.ByteString.Lazy
import Data.Monoid
import Data.Text (Text)
import qualified Data.Text as Text
import Network.Wreq
import qualified Network.Wreq as Wreq
import Network.Wreq.ErrorHandling
import qualified Options.Applicative as Options
import Options.Generic
import URI.ByteString
import Types.Exceptions
import Types.Hash ()
import Types.ImageName
import Types.ImageTag
import Types.URI ()
-- | Docker registry URI.
type RegistryURI = (URIRef Absolute)
-- | Docker registry username.
type Username = Text
-- | Docker registry user password.
type Password = Text
-- | Docker image layer sha256 hash digest.
type Layer = Text
-- | SHA256 hash digest with the hash algorithm identifier prefix
-- stripped.
type StrippedDigest = Text
-- | Docker image manifest JSON.
type Manifest = Data.ByteString.Lazy.ByteString
-- | Docker image config JSON.
type ImageConfigJSON = Data.ByteString.Lazy.ByteString
-- | Wreq response type parameterized by the lazy bytestring type.
type RspBS = Network.Wreq.Response Data.ByteString.Lazy.ByteString
-- | A file extension.
type Extension = String
-- | RepoName is the part before the forward slash in a docker image
-- name, e.g: @library@ in @library/debian@
type RepoNamePart = Text
-- | ImageName is the part after the forward slash in a docker image
-- name, e.g: @debian@ in @library/debian@
type ImageNamePart = Text
-- | Docker image config JSON file's sha256 hash digest in Nix's
-- base32 encoding.
--
-- NB: it's very important to realize there's a significant difference
-- between Nix's base32 encoding and the standard base32 encoding!
-- (i.e., they're not compatible).
type ConfigDigest = Base32Digest
-- | Generic top-level optparse-generic CLI args data type and
-- specification.
--
-- NOTE: `hocker-layer` does not use this data type because it
-- requires an additional layer sha256 hash digest argument.
data OptArgs w = OptArgs
{ -- | URI for the registry, optional
registry :: w ::: Maybe RegistryURI
<?> "URI of registry, defaults to the Docker Hub registry"
, credentials :: Maybe Credentials
-- | Filesystem path to write output to
, out :: w ::: Maybe FilePath
<?> "Write content to location"
-- | Docker image name (includes the reponame, e.g: library/debian)
, imageName :: ImageName
-- | Docker image tag
, imageTag :: ImageTag
} deriving (Generic)
instance ParseRecord (OptArgs Wrapped)
deriving instance Show (OptArgs Unwrapped)
-- | Hocker 'ExceptT' and 'ReaderT' transformer stack threading a
-- 'HockerMeta' data type.
newtype Hocker a = Hocker { unHocker :: Reader.ReaderT HockerMeta (Except.ExceptT HockerException IO) a }
deriving
( Functor
, Applicative
, Monad
, MonadIO
, MonadReader HockerMeta
, MonadError HockerException
)
runHocker :: Hocker a -> HockerMeta -> IO (Either HockerException a)
runHocker (unHocker -> d) = Except.runExceptT . interceptHttpExc . Reader.runReaderT d
-- | Red wagon record carrying around the environment as we fetch,
-- transform, and assemble docker image artifacts.
data HockerMeta = HockerMeta
{ dockerRegistry :: RegistryURI
, auth :: Maybe Wreq.Auth
, imageName :: ImageName
, imageTag :: ImageTag
, out :: Maybe FilePath
, outDir :: Maybe FilePath
, imageLayer :: Maybe (Hash.Digest Hash.SHA256)
} deriving (Show)
-- | Newtype base32 encoding of a hash digest.
--
-- Please note, this base32 encoding is unique to Nix and not
-- compatible with other base32 encodings.
newtype Base32Digest = Base32Digest Text
deriving (Show, Read, Eq)
-- | Newtype base16 encoding of a hash digest.
--
-- This encoding has no known idiosyncrasies specific to Nix; it
-- should be compatible with other tools' and libraries' expectations.
newtype Base16Digest = Base16Digest Text
deriving (Show, Read, Eq)
data Credentials = Basic Username Password | BearerToken Text
deriving (Show)
instance ParseField Credentials where
parseField _ _ = (Basic <$> parseUsername <*> parsePassword) <|> (BearerToken <$> parseToken)
where
parseUsername = Text.pack <$>
(Options.option Options.str $
( Options.metavar "BASIC USERNAME"
<> Options.long "username"
<> Options.short 'u'
<> Options.help "Username part of a basic auth credential"
)
)
parsePassword = Text.pack <$>
(Options.option Options.str $
( Options.metavar "BASIC PASSWORD"
<> Options.long "password"
<> Options.short 'p'
<> Options.help "Password part of a basic auth credential"
)
)
parseToken = Text.pack <$>
(Options.option Options.str $
( Options.metavar "BEARER TOKEN"
<> Options.long "token"
<> Options.short 't'
<> Options.help "Bearer token retrieved from a call to `docker login` (mutually exclusive to --username and --password)"
)
)
instance ParseFields Credentials
instance ParseRecord Credentials where
parseRecord = fmap Options.Generic.getOnly parseRecord

39
src/Types/Exceptions.hs Normal file
View File

@ -0,0 +1,39 @@
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE ScopedTypeVariables #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types.Exceptions
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types.Exceptions where
import Control.DeepSeq
import Control.Exception
import Data.Monoid
import GHC.Generics
data HockerException = HockerException
{ baseMsg :: String
, expected :: Maybe String
, received :: Maybe String
} deriving (Read, Generic, NFData)
instance Exception HockerException
instance Show HockerException where
show (HockerException m e r) = m <> (ext $ e <> r)
where
ext (Just v) = "; " <> v
ext Nothing = mempty
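-- For example:
--
-- >>> show (HockerException "schema version mismatch" (Just "expected 2") Nothing)
-- "schema version mismatch; expected 2"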
hockerException :: String -> HockerException
hockerException m = HockerException m Nothing Nothing

43
src/Types/Hash.hs Normal file
View File

@ -0,0 +1,43 @@
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types.Hash
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types.Hash where
import qualified Crypto.Hash as Hash
import qualified Data.ByteArray as BA
import qualified Data.ByteArray.Encoding as BA
import qualified Data.ByteString.Char8 as C8
import Data.Monoid
import qualified Data.Text
import qualified Options.Applicative as Options
import Options.Generic
toBytes :: C8.ByteString -> Either String BA.Bytes
toBytes = BA.convertFromBase BA.Base16
readSHA256 :: C8.ByteString -> Maybe (Hash.Digest Hash.SHA256)
readSHA256 = either (const Nothing) Hash.digestFromByteString . toBytes
instance ParseField (Hash.Digest Hash.SHA256) where
parseField h _ =
(Options.option (Options.maybeReader (readSHA256 . C8.pack)) $
( Options.metavar "SHA256"
<> Options.short 'l'
<> Options.long "layer"
<> maybe mempty (Options.help . Data.Text.unpack) h
)
)
instance ParseFields (Hash.Digest Hash.SHA256) where
instance ParseRecord (Hash.Digest Hash.SHA256) where
parseRecord = fmap getOnly parseRecord

38
src/Types/ImageName.hs Normal file
View File

@ -0,0 +1,38 @@
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types.ImageName
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types.ImageName where
import Control.DeepSeq
import Data.Monoid
import qualified Options.Applicative as Options
import Options.Generic
newtype ImageName = ImageName { unImageName :: String }
deriving (Generic, Show)
instance ParseField ImageName where
parseField _ _ =
ImageName <$>
(Options.argument Options.str $
( Options.metavar "IMAGE-NAME"
<> Options.help "Docker image name, e.g: 'debian' in debian:jessie"
)
)
instance ParseFields ImageName where
instance ParseRecord ImageName where
parseRecord = fmap getOnly parseRecord
instance NFData ImageName

38
src/Types/ImageTag.hs Normal file
View File

@ -0,0 +1,38 @@
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types.ImageTag
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types.ImageTag where
import Control.DeepSeq
import Data.Monoid
import qualified Options.Applicative as Options
import Options.Generic
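-- | A Docker image tag, e.g. @jessie@ in @debian:jessie@.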
newtype ImageTag = ImageTag { unImageTag :: String }
deriving (Generic, Show)
instance ParseField ImageTag where
parseField _ _ =
ImageTag <$>
(Options.argument Options.str $
( Options.metavar "IMAGE-TAG"
<> Options.help "Docker image tag identifier, e.g: 'jessie' in debian:jessie"
)
)
instance ParseFields ImageTag where
instance ParseRecord ImageTag where
parseRecord = fmap getOnly parseRecord
instance NFData ImageTag

44
src/Types/URI.hs Normal file
View File

@ -0,0 +1,44 @@
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ViewPatterns #-}
{-# OPTIONS -fno-warn-orphans #-}
-----------------------------------------------------------------------------
-- |
-- Module : Types.URI
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Types.URI where
import Control.Lens
import qualified Data.ByteString.Char8 as C8
import Data.Monoid
import qualified Data.Text as Text
import qualified Options.Applicative as Options
import Options.Applicative.Builder
import Options.Generic
import URI.ByteString
-- | An optparse-applicative reader for absolute URIs; parse failures are
-- rendered as error strings.
uriReader :: ReadM (URIRef Absolute)
uriReader = Options.eitherReader parseURIArg
where
parseURIArg (parseURI strictURIParserOptions . C8.pack -> parsedURI) =
over _Left show parsedURI
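-- Orphan instances wiring 'URIRef Absolute' into optparse-generic: the
-- record field's name (if any) becomes the long flag and its help text is
-- threaded through to the option.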
instance ParseField (URIRef Absolute) where
parseField h n =
(Options.option uriReader $
( Options.metavar "URI"
<> foldMap (Options.long . Text.unpack) n
<> foldMap (Options.help . Text.unpack) h
)
)
instance ParseFields (URIRef Absolute) where
instance ParseRecord (URIRef Absolute) where
parseRecord = fmap getOnly parseRecord

10
stack.yaml Normal file
View File

@ -0,0 +1,10 @@
flags: {}
nix:
shell-file: nix/shell.nix
extra-package-dbs: []
packages:
- '.'
extra-deps:
- concurrentoutput-0.2.0.2
- hnix-0.3.4
resolver: lts-8.12

19
test/Main.hs Normal file
View File

@ -0,0 +1,19 @@
module Main where
import Test.Tasty
import Test.Tasty.HUnit
import qualified Tests.Data.Docker.Image.V1 as ImageV1Tests
import qualified Tests.Data.Docker.Image.V1_2 as ImageV1_2Tests
import qualified Tests.Data.Docker.Nix.FetchDocker as FetchDockerTests
main :: IO ()
main = defaultMain tests
tests :: TestTree
tests = testGroup "Tests"
[ ImageV1Tests.unitTests
, ImageV1_2Tests.unitTests
, FetchDockerTests.tests
]

86
test/Tests/Data/Docker/Image/V1.hs Normal file
View File

@ -0,0 +1,86 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
-----------------------------------------------------------------------------
-- |
-- Module : Tests.Data.Docker.Image.V1
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Tests.Data.Docker.Image.V1 where
import qualified Crypto.Hash as Hash
import qualified Data.ByteString.Char8 as C8
import Data.Docker.Image.V1.Layer
import Data.Docker.Image.V1.Types
import Data.Maybe
import Data.Sequence as Seq
import Test.Tasty
import Test.Tasty.HUnit
import Lib
unitTests = testGroup "V1 Image Tests"
[ testCase "Digest (De)Serialization" testDigest
, testCase "Handle bad digest" testBadDigest1
, testCase "Handle bad digest" testBadDigest2
, testCase "Digest == ChainID" testChainID
, testCase "Digest == DiffID" testDiffID
, testCase "ChainID sequence generation" testChainIDGeneration
]
mkHash = Lib.sha256 "somestring"
-- DiffID sequence from a real Docker Image.
diffIds :: Seq DiffID
diffIds = fromList $ fmap (DiffID . fromJust . toDigest)
[ "sha256:f96222d75c5563900bc4dd852179b720a0885de8f7a0619ba0ac76e92542bbc8"
, "sha256:149636c850120e59e6bb79f2fc23ed423030afc73841c221906a147d61da11a9"
, "sha256:33c3a104206aed2ae947e03c48cc011af0a3e5b87e7ba8e7cbc298273a638921"
, "sha256:2681a05b8f8288a384dbddf0b899ec9d2bea3ee265f1678230d0bdac6dc13da1"
, "sha256:dcfda398b984bb5a55e1932079b6cc4823e433bd6c962f9667eaf28b0f1fe7e0"
, "sha256:2a182bf72d68b9c7cb76be0f9dcadd047088ae6f8cb85e7ac9661f68537acccd"
, "sha256:647af69f55fd5fdc27db7b6aa51945aec53b0b03d17095e79b4c69c6432195c7"
, "sha256:c7ef4827bb9592e9788c1cc49e3db4e265c12f49c9b1f6c9bb99551eb5189020"
, "sha256:f9361c1f9b1eb2d93709546fe6ad48786cea55c03c4e52d3f1cdb341e0d398da"
]
-- Pre-computed golden result produced by a valid Python
-- implementation of the ChainID sequence generation logic.
preComputedChainIds :: Seq ChainID
preComputedChainIds = fromList $ fmap (ChainID . fromJust . toDigest)
[ "sha256:f96222d75c5563900bc4dd852179b720a0885de8f7a0619ba0ac76e92542bbc8"
, "sha256:5e6f832cd2df18460af48ed117c5b63bc2189971c9346e6d952376b5a8ba74ff"
, "sha256:19947c09eddb9dab0d1b938660cd72ea4bb8f0f24c604cf9e1d9b14772d7bd6d"
, "sha256:b0fbea1a99ec834d59e524733f1be81f1dce325dbe9df58bba5dec7014b386c8"
, "sha256:262faf2cc4db81d3bcb526099b7dc33069b24f4028a9a23d46edca2493077ce0"
, "sha256:ac07dba5e07787c2a10edc3f8d8052f38cb5bec6767520bbab4289cb55b3a3f4"
, "sha256:c781557b490e1e8ff2132af386abe2a9c2d3cb66df06ee2cbd489d869432328a"
, "sha256:ff275e52e374819094e8035459820bf8e5fc42f287f603b445a8aee7aba2b689"
, "sha256:ffd859ffb35598eeec1283f3ccb3633f2798c042343425f635d616633cf63c2b"
]
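-- The recurrence being verified (per the Docker/OCI image specification) is,
-- roughly:
--
--   ChainID(L0) = DiffID(L0)
--   ChainID(Ln) = SHA256(ChainID(Ln-1) <> " " <> DiffID(Ln))
--
-- which is why the first pre-computed entry equals the first DiffID above.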
testDigest =
let digest = mkHash
digestStr = showSHA digest
in toDigest (C8.pack digestStr) @?= (Just digest)
testBadDigest1 = toDigest "ffd859ffb35598eeec1283f3ccb3633f2798c042343425f635d616633cf63c2b" @?= Nothing
testBadDigest2 = toDigest "ffd859ffb35598eeec1283f3corrupt?" @?= Nothing
testChainID =
let digest = mkHash
in (show $ ChainID digest) @?= showSHA digest
testDiffID =
let digest = mkHash
in (show $ DiffID digest) @?= showSHA digest
testChainIDGeneration =
let chainIDs = squishMaybe $ chainIDSequence diffIds
in chainIDs @?= preComputedChainIds

74
test/Tests/Data/Docker/Image/V1_2.hs Normal file
View File

@ -0,0 +1,74 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
-----------------------------------------------------------------------------
-- |
-- Module : Tests.Data.Docker.Image.V1_2
-- Copyright : (C) 2016 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Tests.Data.Docker.Image.V1_2 where
import Data.Aeson
import qualified Data.ByteString.Lazy.Char8 as C8L
import Data.Docker.Image.V1_2.Types
import Data.HashMap.Strict as H
import Test.Tasty
import Test.Tasty.HUnit
import Lib
-----------------------------------------------------------------------------
--
unitTests = testGroup "V1.2 Image Tests"
[ testCase "ImageManifest golden encoding" testImageManifestGoldenEncoding
, testCase "ImageManifest two-way encoding" testImageManifestTwoWayEncoding
, testCase "ImageRepositories golden encoding" testImageRepositoriesGoldenEncoding
, testCase "ImageRepositories two-way encoding" testImageRepositoriesTwoWayEncoding
]
-----------------------------------------------------------------------------
-- TESTS
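-- The golden strings below are the canonical JSON renderings (keys sorted,
-- no extra whitespace) of the @manifest.json@ and @repositories@ files found
-- in a V1.2 Docker image archive.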
testImageManifestGoldenEncoding =
let goldenStr = "[{\"Config\":\"3e83c23dba6a16cd936a3dc044df71b26706c5a4c28181bc3ca4a4af9f5f38ee.json\",\"Layers\":[\"10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9/layer.tar\"],\"RepoTags\":[\"library/debian:jessie\"]}]"
imgManifest = [ImageManifest
"3e83c23dba6a16cd936a3dc044df71b26706c5a4c28181bc3ca4a4af9f5f38ee.json"
[ "library/debian:jessie" ]
[ "10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9/layer.tar" ]
]
in (Lib.encodeCanonical imgManifest) @?= (C8L.pack goldenStr)
testImageManifestTwoWayEncoding =
let imgManifest = [ImageManifest
"3e83c23dba6a16cd936a3dc044df71b26706c5a4c28181bc3ca4a4af9f5f38ee.json"
[ "library/debian:jessie" ]
[ "10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9/layer.tar" ]
]
encoded = Lib.encodeCanonical imgManifest
in decode encoded @?= (Just imgManifest)
testImageRepositoriesGoldenEncoding =
let goldenStr = "{\"library/debian\":{\"jessie\":\"10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9\"}}"
imgRepos = ImageRepositories
[ImageRepo
"library/debian"
(H.singleton
"jessie"
"10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9")]
in (Lib.encodeCanonical imgRepos) @?= (C8L.pack goldenStr)
testImageRepositoriesTwoWayEncoding =
let imgRepos = ImageRepositories
[ImageRepo
"library/debian"
(H.singleton
"jessie"
"10a267c67f423630f3afe5e04bbbc93d578861ddcc54283526222f3ad5e895b9")]
encoded = Lib.encodeCanonical imgRepos
in decode encoded @?= (Just imgRepos)

73
test/Tests/Data/Docker/Nix/FetchDocker.hs Normal file
View File

@ -0,0 +1,73 @@
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RecordWildCards #-}
-----------------------------------------------------------------------------
-- |
-- Module : Tests.Data.Docker.Nix.FetchDocker
-- Copyright : (C) 2017 Awake Networks
-- License : AllRightsReserved
-- Maintainer : Awake Networks <opensource@awakenetworks.com>
-- Stability : stable
----------------------------------------------------------------------------
module Tests.Data.Docker.Nix.FetchDocker where
import Control.Exception as CE
import Control.Monad.Except as Except
import Data.ByteString.Lazy.Char8 as C8L
import Data.Either (either)
import qualified Data.Text as T
import Network.URI
import Test.Tasty
import Test.Tasty.Golden
import Test.Tasty.HUnit
import Text.PrettyPrint.ANSI.Leijen as Text.PrettyPrint (displayS)
import Data.Docker.Image.Types
import Data.Docker.Nix.FetchDocker as Nix.FetchDocker
import Data.Docker.Nix.Lib as Nix.Lib
import Lib
import Network.Wreq.Docker.Registry.V2 as Docker.Registry
import Types
import Types.ImageTag
tests = testGroup "FetchDocker Nix Generation Tests"
[ goldenVsString
"Golden vs. Generated `fetchDocker' Nix Expression"
"test/data/golden-debian:jessie.nix"
generateFetchDockerNix
, testCase "Base16 Digest to Base32 Digest" testBase16toBase32
]
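-- Converts a base16 sha256 digest to Nix's base32 representation by
-- shelling out to @nix-hash@, and checks it against a known-good value.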
testBase16toBase32 :: Assertion
testBase16toBase32 = do
let b16 = Base16Digest "5c90d4a2d1a8dfffd05ff2dd659923f0ca2d843b5e45d030e17abbcd06a11b5b"
b32 = Base32Digest "0nqvl43cvfvsw4qd0iay7f22vjph4fcnbpgjbz8gzpx8s6id942w"
res <- Except.runExceptT $ do
nixhash <- Lib.findExec "nix-hash"
Nix.Lib.toBase32Nix nixhash b16
either
(assertFailure . show)
(assertEqual "" b32)
res
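-- | Render a @fetchdocker@ Nix expression from the checked-in
-- @debian:jessie@ manifest; the golden test above compares the result
-- against @test/data/golden-debian:jessie.nix@.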
generateFetchDockerNix :: IO C8L.ByteString
generateFetchDockerNix = do
manifest <- C8L.readFile "test/data/manifest-debian:jessie.json"
nixExpression <- Nix.FetchDocker.generate
HockerImageMeta
{ imageRepo = "library"
, imageName = "debian"
, imageTag = ImageTag "jessie"
, manifestJSON = manifest
, dockerRegistry = defaultRegistry
, altImageName = Nothing
}
either
(Lib.die . T.pack . show)
(return . C8L.pack . (flip displayS "") . Lib.renderNixExpr)
nixExpression

20
test/data/golden-debian:jessie.nix Normal file
View File

@ -0,0 +1,20 @@
{
config.docker.images.debian = pkgs.fetchdocker {
name = "debian";
registry = "https://registry-1.docker.io/v2/";
repository = "library";
imageName = "debian";
tag = "jessie";
imageConfig = pkgs.fetchDockerConfig {
inherit registry repository imageName tag;
sha256 = "1rwinmvfc8jxn54y7qnj82acrc97y7xcnn22zaz67y76n4wbwjh5";
};
imageLayers = let
layer0 = pkgs.fetchDockerLayer {
inherit registry repository imageName tag;
layerDigest = "cd0a524342efac6edff500c17e625735bbe479c926439b263bbe3c8518a0849c";
sha256 = "1744l0c8ag5y7ck9nhr6r5wy9frmaxi7xh80ypgnxb7g891m42nd";
};
in [ layer0 ];
};
}

16
test/data/manifest-debian:jessie.json Normal file
View File

@ -0,0 +1,16 @@
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 1528,
"digest": "sha256:054abe38b1e6f863befa4258cbfaf127b1cc9440d2e2e349b15d22e676b591e7"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 52550276,
"digest": "sha256:cd0a524342efac6edff500c17e625735bbe479c926439b263bbe3c8518a0849c"
}
]
}