mirror of https://github.com/hasura/graphql-engine.git, synced 2024-12-13 19:33:55 +03:00

Use `docker compose`, not `docker-compose`.

Docker Compose is now a plugin for Docker, bundled by default in Docker Desktop and many Linux distribution packages. The standalone `docker-compose` binary has been deprecated since Docker Compose v2. Using the new version directly allows us to write development scripts that do not require `docker-compose` to be installed.

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/5185
GitOrigin-RevId: c8542b8b2405d1aa32288991688c6fde4af96383

This commit is contained in:
parent 59d8bc66cc
commit 349ccd3296
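
The practical upshot for scripts is that Compose is now invoked as a subcommand of `docker` rather than as a separate binary. As a hedged sketch (not part of this commit), a script that still has to run on machines with only the standalone v1 binary could pick whichever form is available:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Prefer the Compose v2 plugin ("docker compose"); fall back to the
# standalone v1 binary only if the plugin is unavailable.
if docker compose version >/dev/null 2>&1; then
  COMPOSE=(docker compose)
elif command -v docker-compose >/dev/null 2>&1; then
  COMPOSE=(docker-compose)
else
  echo "Docker Compose is not installed" >&2
  exit 1
fi

"${COMPOSE[@]}" up -d
```

The scripts changed in this commit simply assume the plugin form, so the fallback above is only illustrative.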
@@ -17,6 +17,8 @@ If you are a first-time contributor, feel free to post your doubts/questions in

 The following dependencies need to be installed in order to work on the CLI codebase:

+- [Docker](https://www.docker.com/get-started/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
 - [Go >= 1.16](https://golang.org/doc/install)
 - [Node.js >= 10.19.0 and npm >= 6.14.4](https://nodejs.org/en/download/)
 - [GNU Make](https://www.gnu.org/software/make/) (optional)
@@ -45,13 +47,12 @@ make deps

 Once everything is installed and running, you can start working on the feature/fix that you picked up.

 ### Run the server

-[Docker Compose](https://github.com/hasura/graphql-engine/tree/stable/install-manifests/docker-compose) is the easiest way to run graphql-engine.
+[Our Docker Compose manifest](https://github.com/hasura/graphql-engine/tree/stable/install-manifests/docker-compose) is the easiest way to run graphql-engine.

 1. From the `graphql-engine` directory, run `cd install-manifests/docker-compose`.
-2. From inside that directory, run `docker-compose up -d`.
+2. From inside that directory, run `docker compose up -d`.

 The GraphQL endpoint will be at `https://localhost:8080/v1/graphql`.
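
Editorial aside, not part of the diff: a hedged end-to-end sketch of the two steps above. The final health check is optional; graphql-engine serves a `/healthz` endpoint on the same port.

```sh
git clone https://github.com/hasura/graphql-engine.git
cd graphql-engine/install-manifests/docker-compose
docker compose up -d

# Optionally confirm the engine is up before opening the console.
curl http://localhost:8080/healthz
```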
@@ -13,10 +13,12 @@ This is a multiplayer tic tac toe app that uses the following components:

 ## Docker deployment

-To deploy all the services run the app using docker:
+To deploy all the services, run the app using Docker Compose:

 ```sh
-docker-compose up -d --build
+docker compose up -d --build
 ```

+(Use `docker-compose` if you are using Docker Compose v1.)
+
 You can access your app at http://localhost:8000
@@ -6,7 +6,7 @@ services:
   reference-agent:
     image: "hasura/dc-reference-agent:${HASURA_VERSION}"
     # build: ./reference
-    ## NOTE: If you want to modify the reference-agent you can use the build config for docker-compose
+    ## NOTE: If you want to modify the reference-agent you can use the build config for Docker Compose

   postgres:
     image: "postgres:13"
@@ -3,16 +3,16 @@
 #
 # Run the following to get started:
 #
-# docker-compose up -d
+# docker compose up -d
 #
 # That will start up services in the background. To take them down,
 # you have to run
 #
-# docker-compose down
+# docker compose down
 #
 # If you changed DB init scripts, then you should also run:
 #
-# docker-compose down --volumes
+# docker compose down --volumes
 #
 # That'll delete the volumes. Otherwise e.g. PostgreSQL will skip
 # initializing if a DB already exists.
@@ -13,7 +13,7 @@ This Docker Compose setup runs [Hasura GraphQL Engine](https://github.com/hasura
 - Map your domain name to this ip address
 - Edit `Caddyfile` and add your domain (replace `:80` with your domain to get automatic HTTPS certs from [LetsEncrypt](https://letsencrypt.org/))
 - Edit `docker-compose.yaml` and change `HASURA_GRAPHQL_ADMIN_SECRET` to something secure
-- `docker-compose up -d`
+- `docker compose up -d`

 GraphQL endpoint will be `https://<your-domain.com>/v1/graphql`
 Console will be available on `https://<your-domain.com>/console`
@@ -15,7 +15,7 @@ This Docker Compose setup runs [Hasura GraphQL Engine](https://github.com/hasura
 - **PGADMIN_DEFAULT_PASSWORD:** `admin`
 - Read more `Environment Variables` here: https://hub.docker.com/r/dpage/pgadmin4/
 - Edit `docker-compose.yaml` and change `HASURA_GRAPHQL_ADMIN_SECRET` to something secure
-- `docker-compose up -d`
+- `docker compose up -d`
 - Navigate to `http://localhost:5050`, login and add a new server with the following parameters:
     General - Name: Hasura
     Connection - Host: `postgres`
@@ -12,7 +12,7 @@ See [this blog post for a tutorial](https://hasura.io/blog/graphql-and-geo-locat
 ## Usage

 - Clone this repo on a machine where you'd like to deploy graphql engine
-- `docker-compose up -d`
+- `docker compose up -d`

 GraphQL endpoint will be `https://<your-domain.com>/v1/graphql`
 Console will be available on `https://<your-domain.com>/console`
@@ -1,24 +0,0 @@
-# Hasura GraphQL Engine on Docker
-
-This Docker Compose setup runs [Hasura GraphQL Engine](https://github.com/hasura/graphql-engine) along with Postgres using `docker-compose`.
-
-## Pre-requisites
-
-- [Docker](https://docs.docker.com/install/)
-- [Docker Compose](https://docs.docker.com/compose/install/)
-
-## Usage
-
-- Clone this repo on a machine where you'd like to deploy graphql engine
-- `docker-compose up -d`
-
-GraphQL endpoint will be `https://<your-domain.com>/v1/graphql`
-Console will be available on `https://<your-domain.com>/console`
-
-## Connecting to External Postgres
-
-If you want to connect to an external/existing postgres database, replace `HASURA_GRAPHQL_DATABASE_URL` in `docker-compose.yaml` with your database url.
-
-**Note: localhost will resolve to the container ip inside a docker container, not the host ip**
@@ -1,31 +0,0 @@
-version: '3.6'
-services:
-  postgres:
-    image: postgres:12
-    restart: always
-    volumes:
-      - db_data:/var/lib/postgresql/data
-    environment:
-      POSTGRES_PASSWORD: postgrespassword
-  graphql-engine:
-    image: hasura/graphql-engine:v2.2.0
-    ports:
-      - "8080:8080"
-    depends_on:
-      - "postgres"
-    restart: always
-    environment:
-      ## postgres database to store Hasura metadata
-      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
-      ## this env var can be used to add the above postgres database to Hasura as a data source. this can be removed/updated based on your needs
-      PG_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
-      ## enable the console served by server
-      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
-      ## enable debugging mode. It is recommended to disable this in production
-      HASURA_GRAPHQL_DEV_MODE: "true"
-      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
-      ## uncomment next line to set an admin secret
-      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
-volumes:
-  db_data:
@@ -10,15 +10,13 @@ This Docker Compose setup runs [Hasura GraphQL Engine](https://github.com/hasura
 ## Usage

 - Clone this repo on a machine where you'd like to deploy graphql engine
-- `docker-compose up -d`
+- `docker compose up -d`

 GraphQL endpoint will be `https://<your-domain.com>/v1/graphql`
 Console will be available on `https://<your-domain.com>/console`

 ## Connecting to External Postgres

-If you want to connect to an external/existing postgres database, replace `HASURA_GRAPHQL_DATABASE_URL` in `docker-compose.yaml` with your database url.
+If you want to connect to an external/existing postgres database, replace `HASURA_GRAPHQL_DATABASE_URL` in `docker-compose.yaml` with your database url.

 **Note: localhost will resolve to the container ip inside a docker container, not the host ip**
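
Editorial aside, not part of the diff: the replacement described above amounts to pointing the existing environment variable at the external database. A hedged sketch of the relevant `docker-compose.yaml` fragment follows; the hostname and credentials are placeholders, not values from this repository.

```yaml
services:
  graphql-engine:
    image: hasura/graphql-engine:v2.2.0
    environment:
      # Point Hasura at an external/existing Postgres instead of the bundled one.
      # Note that "localhost" here would resolve inside the container, so use a
      # hostname or IP that is reachable from within Docker.
      HASURA_GRAPHQL_DATABASE_URL: postgres://dbuser:dbpassword@db.example.com:5432/hasura
```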
@@ -325,17 +325,17 @@ if [ "$MODE" = "graphql-engine" ]; then
   echo_pretty " $ $0 postgres"
   echo_pretty ""

-  RUN_INVOCATION=(cabal new-run --project-file=cabal/dev-sh.project --RTS --
+  RUN_INVOCATION=(cabal new-run --RTS --
     exe:graphql-engine +RTS -N -T -s -RTS serve
     --enable-console --console-assets-dir "$PROJECT_ROOT/console/static/dist"
     )

   echo_pretty 'About to do:'
-  echo_pretty ' $ cabal new-build --project-file=cabal/dev-sh.project exe:graphql-engine'
+  echo_pretty ' $ cabal new-build exe:graphql-engine'
   echo_pretty " $ ${RUN_INVOCATION[*]}"
   echo_pretty ''

-  cabal new-build --project-file=cabal/dev-sh.project exe:graphql-engine
+  cabal new-build exe:graphql-engine

   # We assume a PG is *already running*, and therefore bypass the
   # cleanup mechanism previously set.
@@ -483,7 +483,6 @@ elif [ "$MODE" = "test" ]; then
   # seems to conflict now, causing re-linking, haddock runs, etc. Instead do a
   # `graphql-engine version` to trigger build
   cabal run \
-    --project-file=cabal/dev-sh.project \
     -- exe:graphql-engine \
     --metadata-database-url="$PG_DB_URL" \
     version
@@ -501,7 +500,6 @@ elif [ "$MODE" = "test" ]; then
     HASURA_GRAPHQL_DATABASE_URL="$PG_DB_URL" \
     HASURA_MSSQL_CONN_STR="$MSSQL_CONN_STR" \
     cabal run \
-      --project-file=cabal/dev-sh.project \
       test:graphql-engine-tests \
       -- "${UNIT_TEST_ARGS[@]}"
   fi
@@ -529,7 +527,6 @@ elif [ "$MODE" = "test" ]; then
   # Using --metadata-database-url flag to test multiple backends
   # HASURA_GRAPHQL_PG_SOURCE_URL_* For a couple multi-source pytests:
   cabal new-run \
-    --project-file=cabal/dev-sh.project \
     -- exe:graphql-engine \
     --metadata-database-url="$PG_DB_URL" serve \
     --stringify-numeric-types \
@@ -20,16 +20,6 @@ CITUS_PORT = 65004
 # function from util.sh (or anywhere else).
 DB_UTILS = source ./.buildkite/scripts/util/util.sh;

-ifneq ($(shell command -v docker-compose),)
-DOCKER_COMPOSE = docker-compose
-else
-ifneq ($(shell command -v nix),)
-DOCKER_COMPOSE = nix run nixpkgs\#docker-compose --
-else
-DOCKER_COMPOSE = $(error "Could not find docker-compose.")
-endif
-endif
-
 ifneq ($(shell command -v sqlcmd),)
 MSSQL_SQLCMD = sqlcmd
 MSSQL_SQLCMD_PORT = $(MSSQL_PORT)
@@ -38,7 +28,7 @@ ifneq ($(shell [[ -e /opt/mssql-tools/bin/sqlcmd ]] && echo true),)
 MSSQL_SQLCMD = /opt/mssql-tools/bin/sqlcmd
 MSSQL_SQLCMD_PORT = $(MSSQL_PORT)
 else
-MSSQL_SQLCMD = docker exec $(shell basename $(PWD))-sqlserver-1 sqlcmd
+MSSQL_SQLCMD = docker compose exec --no-TTY sqlserver sqlcmd
 MSSQL_SQLCMD_PORT = 1433
 endif
 endif
@@ -62,7 +52,7 @@ start-postgres: spawn-postgres wait-for-postgres

 .PHONY: spawn-postgres
 spawn-postgres:
-	$(DOCKER_COMPOSE) up -d postgres
+	docker compose up -d postgres

 .PHONY: wait-for-postgres
 wait-for-postgres:
@@ -75,7 +65,7 @@ start-citus: spawn-citus wait-for-citus

 .PHONY: spawn-citus
 spawn-citus:
-	$(DOCKER_COMPOSE) up -d citus
+	docker compose up -d citus

 .PHONY: wait-for-citus
 wait-for-citus:
@@ -88,7 +78,7 @@ start-sqlserver: spawn-sqlserver wait-for-sqlserver

 .PHONY: spawn-sqlserver
 spawn-sqlserver:
-	$(DOCKER_COMPOSE) up -d sqlserver
+	docker compose up -d sqlserver

 .PHONY: wait-for-sqlserver
 wait-for-sqlserver:
@@ -101,7 +91,7 @@ start-mysql: spawn-mysql wait-for-mysql

 .PHONY: spawn-mysql
 spawn-mysql:
-	$(DOCKER_COMPOSE) up -d mariadb
+	docker compose up -d mariadb

 .PHONY: wait-for-mysql
 wait-for-mysql:
@@ -113,7 +103,7 @@ start-dc-reference-agent: spawn-dc-reference-agent wait-for-dc-reference-agent

 .PHONY: spawn-dc-reference-agent
 spawn-dc-reference-agent:
-	$(DOCKER_COMPOSE) up -d dc-reference-agent
+	docker compose up -d dc-reference-agent

 # This target is probably unncessary, but there to follow the pattern.
 .PHONY: wait-for-dc-reference-agent
@@ -129,7 +119,7 @@ start-backends: \
 ## stop-everything: tear down test databases
 stop-everything:
 	# stop docker
-	$(DOCKER_COMPOSE) down -v
+	docker compose down -v

 .PHONY: remove-tix-file
 remove-tix-file:
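
Editorial aside, not part of the diff: the Makefile targets touched above keep their usual workflow. A hedged usage sketch, based only on the targets visible in these hunks:

```sh
# Bring up all the test databases in the background.
make start-backends

# ...run whatever tests you need, then tear down the containers and their volumes.
make stop-everything
```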
@@ -15,6 +15,8 @@ own machine and how to contribute.

 For building console and running test suite:

+- [Docker](https://www.docker.com/get-started/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
 - [Node.js](https://nodejs.org/en/) (v12+, it is recommended that you use `node` with version `v12.x.x` A.K.A `erbium` or version `14.x.x` A.K.A `Fermium`)
 - npm >= 5.7
 - python >= 3.5 with pip3 and virtualenv
@@ -190,7 +192,7 @@ HASURA_GRAPHQL_DATABASE_URL='postgres://<user>:<password>@<host>:<port>/<dbname>
 1. To run the Haskell integration test suite, you'll first need to bring up the database containers:

    ```sh
-   docker-compose up
+   docker compose up
    ```

 2. Once the containers are up, you can run the test suite via
@@ -81,10 +81,9 @@ _Note to Hasura team: a service account is already setup for internal use, pleas

 This will start up Postgres, SQL Server, Citus, MariaDB and the Hasura Data Connectors' reference agent.

-> __Note__: on ARM64 architecture we'll need additional steps in order to test mssql properly.
-> See [`SQLServer` failures on Apple M1 chips](#sqlserver-failures-on-apple-m1-chips)
-> for more details.
+> **Note**: on ARM64 architecture we'll need additional steps in order to test mssql properly.
+> See [`SQLServer` failures on Apple M1 chips](#sqlserver-failures-on-apple-m1-chips)
+> for more details.

 2. Once the containers are up, you can run the test suite via
@@ -92,19 +91,19 @@ _Note to Hasura team: a service account is already setup for internal use, pleas
 cabal run tests-hspec
 ```

-You can also further refine which tests to run using the `-m` flag:
+You can also further refine which tests to run using the `-m` flag:

 ```sh
 cabal run tests-hspec -- -m "SQLServer"
 ```

-For additional information, consult the help section:
+For additional information, consult the help section:

 ```sh
 cabal run tests-hspec -- --help
 ```

-3. The local databases persist even after shutting down docker-compose.
+3. The local databases persist even after shutting down the containers.
 If this is undesirable, delete the databases using the following command:

 ```sh
@@ -159,12 +158,13 @@ consistently across different backends.
 running test trees in terms of a list of `Context a`s.

 Each `Context a` requires:

 - a unique `name`, of type `ContextName`
 - a `mkLocalTestEnvironment` action, of type `TestEnvironment -> IO a`
 - a `setup` action, of type `(TestEnvironment, a) -> IO ()`
 - a `teardown` action, of type `(TestEnvironment, a) -> IO ()`
 - an `customOptions` parameter, which will be threaded through the
-tests themselves to modify behavior for a particular `Context`
+tests themselves to modify behavior for a particular `Context`

 Of these two functions, whether one wishes to use `Harness.Test.Context.run` or
 `Harness.Test.Context.runWithLocalTestEnvironment` will depend on if their test can be
@@ -235,8 +235,7 @@ backend using `Backend.<backend>.run_`.

 ### Writing tests

-Test should be written (or reachable from) `tests :: SpecWith TestEnvironment`, or `tests
-:: SpecWith (TestEnvironment, Foo)` for tests that use an additional local state.
+Test should be written (or reachable from) `tests :: SpecWith TestEnvironment`, or `tests :: SpecWith (TestEnvironment, Foo)` for tests that use an additional local state.

 A typical test will look similar to this:
@@ -268,7 +267,7 @@ data:
 - Runs a POST request against graphql-engine which can be specified using the `graphql` quasi-quoter.
 - Compares the response to an expected result which can be specified using the `yaml` quasi-quoter.

-__Note__: these quasi-quoter can also perform string interpolation. See the relevant modules
+**Note**: these quasi-quoter can also perform string interpolation. See the relevant modules
 under the [Harness.Quoter](Harness/Quoter) namespace.

 ## Debugging
@@ -279,7 +278,7 @@ database. The default behavior of the test suite is to drop all the
 data and the tables onces the test suite finishes. To prevent that,
 you can modify your test module to prevent teardown. Example:

-``` diff
+```diff
 spec :: SpecWith TestEnvironment
 spec =
   Context.run
@@ -341,39 +340,42 @@ Citus

 Points to note:

-* `SpecHook.setupTestEnvironment` starts the HGE server, and its url is revealed by `instance Show TestEnvironment`.
-* `SpecHook.teardownTestEnvironment` stops it again.
-* This is a good idea to do before issuing the `:reload` command, because
+- `SpecHook.setupTestEnvironment` starts the HGE server, and its url is revealed by `instance Show TestEnvironment`.
+- `SpecHook.teardownTestEnvironment` stops it again.
+- This is a good idea to do before issuing the `:reload` command, because
   reloading loses the `te` reference but leaves the thread running!
-* `Context.contextRepl` runs the setup action of a given `Context` and returns a
+- `Context.contextRepl` runs the setup action of a given `Context` and returns a
   corresponding teardown action.
-* After running this you can interact with the HGE console in the same state
+- After running this you can interact with the HGE console in the same state
   as when the tests are run.
-* Or you can run individual test `Example`s or `Spec`s.
-* To successfully debug/develop a test in the GHCI repl, the test module should:
-  * define its `Context`s as toplevel values,
-  * define its `Example`s as toplevel values,
-  * ... such that they can be used directly in the repl.
+- Or you can run individual test `Example`s or `Spec`s.
+- To successfully debug/develop a test in the GHCI repl, the test module should:
+  - define its `Context`s as toplevel values,
+  - define its `Example`s as toplevel values,
+  - ... such that they can be used directly in the repl.

 ## Style guide

 ### Stick to [Simple Haskell](https://www.simplehaskell.org/)

 This test suite should remain accessible to contributors who are new to Haskell and/or the GraphQL engine codebase. Consider the [power-to-weight](https://www.snoyman.com/blog/2019/11/boring-haskell-manifesto/#power-to-weight-ratio) ratio of features, language extensions or abstractions before you introduce them. For example, try to fully leverage Haskell '98 or 2010 features before making use of more advanced ones.

 ### Write small, atomic, autonomous specs

 Small: Keep specs short and succinct wherever possible. Consider reorganising modules that grow much longer than ~200-300 lines of code.

-*For example: The [`TestGraphQLQueryBasic*` pytest class](../tests-py/test_graphql_queries.py#L251) was ported to the hspec suite as separate `BasicFields`, `LimitOffset`, `Where`, `Ordering`, `Directives` and `Views` specs.*
+_For example: The [`TestGraphQLQueryBasic` pytest class](../tests-py/test_graphql_queries.py#L251) was ported to the hspec suite as separate `BasicFields`, `LimitOffset`, `Where`, `Ordering`, `Directives`and `Views` specs._

 Atomic: Each spec should test only one feature against the backends (or contexts) that support it. Each module should contain only the context setup and teardown, and the tests themselves. The database schema, test data, and feature expectations should be easy to reason about without navigating to different module.

-*For example: [`BasicFieldsSpec.hs`](Test/BasicFieldsSpec.hs)*
+_For example: [`BasicFieldsSpec.hs`](Test/BasicFieldsSpec.hs)_

 Autonomous: Each test should run independently of other tests, and not be dependent on the results of a previous test. Shared test state, where unavoidable, should be made explicit.

-*For example: [Remote relationship tests](Test/RemoteRelationship/) explicitly require shared state.*
+_For example: [Remote relationship tests](Test/RemoteRelationship/) explicitly require shared state._

 ### Use the `Harness.*` hierarchy for common functions

 Avoid functions or types in tests, other than calls to the `Harness.*` API.

 Any supporting code should be in the `Harness.*` hierarchy and apply broadly to the test suites overall.
|
||||
## Troubleshooting
|
||||
|
||||
### `Database 'hasura' already exists. Choose a different database name.` or `schema "hasura" does not exist`
|
||||
|
||||
This typically indicates persistent DB state between test runs. Try `docker compose down --volumes` to delete the DBs and restart the containers.
|
||||
|
||||
### General `DataConnector` failures
|
||||
@@ -393,7 +396,6 @@ We have a few problems with SQLServer on M1:

 1. Compiler bug in GHC 8.10.7 on M1.

-
    Due to compiler bugs in GHC 8.10.7 we need to use patched Haskell ODBC bindings as a workaround for M1 systems.
    Make the following change in the `cabal.project`:
@@ -407,15 +409,15 @@ We have a few problems with SQLServer on M1:
    ```

 2. Microsoft did not release SQL Server for M1. We need to use Azure SQL Edge instead.

    Switch the docker image in `docker-compose/sqlserver/Dockerfile` to `azure-sql-edge`:

    ```diff
    - FROM mcr.microsoft.com/mssql/server:2019-latest@sha256:a098c9ff6fbb8e1c9608ad7511fa42dba8d22e0d50b48302761717840ccc26af
    + FROM mcr.microsoft.com/azure-sql-edge
    ```

-   Note: you might need to rebuild docker-compose with `docker compose build`
+   Note: you might need to rebuild the Docker images with `docker compose build`

 3. Azure SQL Edge does not ship with the `sqlcmd` utility with which we use to setup the SQL Server schema.
@@ -427,4 +429,3 @@ We have a few problems with SQLServer on M1:
    - docker compose up
    + docker compose up & (cd docker-compose/sqlserver/ && ./run-init.sh 65003) && fg
    ```