chore(server): delete MySQL native backend

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9050
GitOrigin-RevId: 1515fb0efdb1baa05ffe3ff7cf6f230acd0cde29
Daniel Harvey 2023-05-05 11:31:19 +01:00 committed by hasura-bot
parent 3f113802cb
commit 285a200a87
51 changed files with 18 additions and 5275 deletions


@@ -1,110 +0,0 @@
---
description: MySQL Preview (Deprecated)
keywords:
- hasura
- mysql
- preview
sidebar_position: 12
sidebar_label: MySQL Preview (Deprecated)
---
import Paperform from '@site/src/components/Paperform';
# MySQL Preview (Deprecated)
## Introduction
:::danger Deprecation Notice
This version of the Hasura MySQL implementation has been **deprecated**. Check out the latest implementation
[for Docker](/databases/mysql/docker.mdx) or [Hasura Cloud](/databases/mysql/cloud.mdx).
:::
## Try it out
MySQL support can be tried out using `docker compose` and an existing MySQL database as follows:
### Prerequisites
- [Docker](https://docs.docker.com/install/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- An existing MySQL database
### Step 1: Get the docker-compose file
Get the Hasura MySQL preview docker compose file:
```bash
# in a new directory run
wget https://raw.githubusercontent.com/hasura/graphql-engine/master/install-manifests/docker-compose-mysql-preview/docker-compose.yaml
# or run
curl https://raw.githubusercontent.com/hasura/graphql-engine/master/install-manifests/docker-compose-mysql-preview/docker-compose.yaml -o docker-compose.yaml
```
### Step 2: Update MySQL config
You now need to update the `docker-compose.yaml` file with the credentials for your MySQL database _(replace the values
surrounded by \<\>)_:
```yaml
...
command:
- graphql-engine
- --mysql-host
- <mysql-host>
- --mysql-user
- <mysql-user>
- --mysql-port
- <mysql-port>
- --mysql-dbname
- <mysql-dbname>
- --mysql-password
- <mysql-password>
...
```
Our [docker networking guide](/deployment/deployment-guides/docker.mdx#docker-networking) might be useful to set the
appropriate value for `mysql-host`. _(See Hasura to API)_
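For example, if MySQL runs on the host machine rather than inside Docker, Docker Desktop exposes the host as `host.docker.internal`; the values below are purely illustrative placeholders for your own setup:

```yaml
command:
  - graphql-engine
  - --mysql-host
  - host.docker.internal   # MySQL on the Docker host (Docker Desktop)
  - --mysql-user
  - root                   # illustrative credentials; use your own
  - --mysql-port
  - "3306"
  - --mysql-dbname
  - mydb
  - --mysql-password
  - mypassword
```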
### Step 3: Run Hasura GraphQL Engine
The following command will run Hasura along with a Postgres database, which it requires to function.
```bash
$ docker compose up -d
```
Check if the containers are running:
```bash
$ docker ps
CONTAINER ID IMAGE ... CREATED STATUS PORTS ...
097f58433a2b hasura/graphql-engine ... 1m ago Up 1m 8080->8080/tcp ...
b0b1aac0508d postgres ... 1m ago Up 1m 5432/tcp ...
```
### Step 4: Try out the GraphQL API
The GraphiQL interface on the Hasura Console, available at `http://localhost:8080/console`, can be used to try out the
generated GraphQL API.
**The Hasura Console currently does not support managing the MySQL database schema**, i.e. the `Data` section of the
console will not display the MySQL tables. Hence, the database schema needs to be managed externally for now.
_(support for this is coming very soon)_
See the [source PR](https://github.com/hasura/graphql-engine/pull/5655) for more information on current limitations and
upcoming features.
## Keep up to date
If you'd like to stay informed about the status of MySQL support, subscribe here:
<Paperform formId="hf-my-sql-preview-status" />
## Give us feedback
We appreciate any feedback. Please open a new [GitHub discussion](https://github.com/hasura/graphql-engine/discussions),
and we can discuss it there.


@@ -51,13 +51,6 @@ schema.
:::
:::warning Deprecation Notice
The previous [MySQL preview implementation](/databases/mysql/deprecated/index.mdx) has been deprecated in favor of this
new implementation built using our [GraphQL Data Connectors](https://hasura.io/blog/hasura-graphql-data-connectors/).
:::
## Coming soon for MySQL
- [Subscriptions](/subscriptions/overview.mdx)

File diff suppressed because it is too large.


@@ -13,7 +13,6 @@
pcre \
unixodbc \
libpq \
mysql-client@5.7 \
libffi \
microsoft/mssql-release/mssql-tools18 \
direnv \
@@ -27,7 +26,6 @@
echo 'export PATH="/opt/homebrew/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="/opt/homebrew/opt/openssl@1.1/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="/opt/homebrew/opt/node@16/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="/opt/homebrew/opt/mysql-client@5.7/bin:$PATH"' >> ~/.zshrc
echo 'export PATH="/opt/homebrew/opt/libpq/bin:$PATH"' >> ~/.zshrc
```
@@ -62,14 +60,6 @@ If you are re-running this command to update your Mac, you may need to run
5. Append lines below to `cabal/dev-sh.project.local` to allow Cabal (the Haskell build tool) to find the C dependencies you installed earlier (remembering to replace `/opt/homebrew` with your brew prefix if different):
```sh
package mysql
extra-include-dirs:
/opt/homebrew/opt/openssl/include
/opt/homebrew/opt/mysql-client@5.7/include
extra-lib-dirs:
/opt/homebrew/opt/openssl/lib
/opt/homebrew/opt/mysql-client@5.7/lib
package odbc
extra-include-dirs: /opt/homebrew/opt/unixodbc/include
extra-lib-dirs: /opt/homebrew/opt/unixodbc/lib


@@ -39,7 +39,7 @@ it for readability.
## Project overview
The goal of this project is for the GraphQL engine to be able to support multiple backends, that is, to expose in our GraphQL schema tables that are stored not only in Postgres, but also in MySQL, MSSQL, MongoDB, and as many others as we need. This poses several challenges, such as:
The goal of this project is for the GraphQL engine to be able to support multiple backends, that is, to expose in our GraphQL schema tables that are stored not only in Postgres, but also in MSSQL, MongoDB, and as many others as we need. This poses several challenges, such as:
- the codebase was originally written with the assumption that the underlying database is Postgres;
- different backends support a different set of features, often incompatible or different in subtle ways.


@@ -122,7 +122,6 @@ extra-source-files:
src-rsr/mssql/mssql_table_metadata.sql
src-rsr/mssql/mssql_unlock_events.sql.shakespeare
src-rsr/mssql/mssql_update_trigger.sql.shakespeare
src-rsr/mysql_table_metadata.sql
src-rsr/pg_function_metadata.sql
src-rsr/pg_source_migrations/0_to_1.sql
src-rsr/pg_source_migrations/1_to_2.sql
@@ -475,9 +474,6 @@ common lib-depends
, data-default-class
, x509-system
, tagged
-- mysql
, mysql
, mysql-simple
-- dependency of vendored 'ip':
, wide-word
@@ -634,26 +630,6 @@ library
, Hasura.Backends.Postgres.Types.Table
, Hasura.Backends.Postgres.Types.Update
, Hasura.Backends.MySQL.DataLoader.Execute
, Hasura.Backends.MySQL.DataLoader.Plan
, Hasura.Backends.MySQL.Types
, Hasura.Backends.MySQL.Types.Internal
, Hasura.Backends.MySQL.Types.Instances
, Hasura.Backends.MySQL.Plan
, Hasura.Backends.MySQL.FromIr
, Hasura.Backends.MySQL.Connection
, Hasura.Backends.MySQL.Meta
, Hasura.Backends.MySQL.Instances.Types
, Hasura.Backends.MySQL.Instances.Metadata
, Hasura.Backends.MySQL.Instances.Schema
, Hasura.Backends.MySQL.Instances.SchemaCache
, Hasura.Backends.MySQL.Instances.Execute
, Hasura.Backends.MySQL.Instances.Transport
, Hasura.Backends.MySQL.Schema.Introspection
, Hasura.Backends.MySQL.SQL
, Hasura.Backends.MySQL.ToQuery
, Hasura.Backends.MySQL.Instances.API
-- GraphQL Data Connector
, Hasura.Backends.DataConnector.Adapter.API
, Hasura.Backends.DataConnector.Adapter.Backend
@@ -1189,8 +1165,6 @@ test-suite graphql-engine-tests
Hasura.Backends.DataConnector.API.V0.ScalarSpec
Hasura.Backends.DataConnector.API.V0.SchemaSpec
Hasura.Backends.DataConnector.API.V0.TableSpec
Hasura.Backends.MySQL.DataLoader.ExecuteTests
Hasura.Backends.MySQL.TypesSpec
Hasura.Backends.Postgres.Connection.VersionCheckSpec
Hasura.Backends.Postgres.Execute.PrepareSpec
Hasura.Backends.Postgres.NativeQueries.NativeQueriesSpec


@@ -324,10 +324,6 @@ schemaCrudTests = describe "A series of actions to setup and teardown a source w
kind: bigquery
display_name: bigquery
available: true
- builtin: true
kind: mysql
display_name: mysql
available: true
- builtin: false
display_name: "FOOBARDB"
kind: foobar


@@ -23,9 +23,6 @@ import Test.Hspec (SpecWith, describe, it)
spec :: SpecWith GlobalTestEnvironment
spec = do
-- TODO: this test causes an internal server error for MySQL, even if we add
-- "SERIAL" as the 'Schema.defaultSerialType' for MySQL.
Fixture.run
( NE.fromList
[ (Fixture.fixture $ Fixture.Backend Postgres.backendTypeMetadata)


@@ -29,15 +29,7 @@ spec :: SpecWith GlobalTestEnvironment
spec = do
Fixture.run
( NE.fromList
[ -- Create table fails currently because we postfix table names for some reason
-- which makes the valid table name go over the limit
--
-- (Fixture.fixture $ Fixture.Backend Fixture.MySQL)
-- { Fixture.setupTeardown = \(testEnv, _) ->
-- [ Mysql.setupTablesAction schema testEnv
-- ]
-- },
(Fixture.fixture $ Fixture.Backend Postgres.backendTypeMetadata)
[ (Fixture.fixture $ Fixture.Backend Postgres.backendTypeMetadata)
{ Fixture.setupTeardown = \(testEnv, _) ->
[ Postgres.setupTablesAction schema testEnv
]


@@ -3,7 +3,6 @@
-- |
-- Queries over object relationships between tables in the schema.
--
-- TODO: MySQL link when docs are released?
-- https://hasura.io/docs/latest/schema/postgres/table-relationships/index
-- https://hasura.io/docs/latest/schema/ms-sql-server/table-relationships/index
-- https://hasura.io/docs/latest/schema/bigquery/table-relationships/index/


@@ -45,6 +45,7 @@ library
, arrows-extra
, autodocodec
, base
, bytestring
, cron
, hasura-prelude
, hasura-extras


@@ -8,9 +8,12 @@ module Hasura.Base.Instances () where
import Autodocodec qualified as AC
import Control.Monad.Fix
import Data.Aeson qualified as J
import Data.ByteString (ByteString)
import Data.Fixed (Fixed (..))
import Data.OpenApi.Declare as D
import Data.Text qualified as T
import Data.Text.Encoding (decodeUtf8With, encodeUtf8)
import Data.Text.Encoding.Error (lenientDecode)
import Data.Text.Extended (ToTxt (toTxt))
import Data.Time (NominalDiffTime)
import Data.URL.Template qualified as UT
@@ -114,6 +117,12 @@ instance J.ToJSON C.CronSchedule where
instance J.ToJSONKey Void
instance J.FromJSON ByteString where
parseJSON = J.withText "ByteString" (pure . encodeUtf8)
instance J.ToJSON ByteString where
toJSON = J.String . decodeUtf8With lenientDecode
--------------------------------------------------------------------------------
-- ODBC
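The new `ByteString` JSON instances above go through UTF-8 text, and `lenientDecode` replaces invalid bytes with U+FFFD rather than failing, so only valid UTF-8 survives a round trip. A standalone sketch of that behaviour (plain `text`/`bytestring` usage, not the Hasura module):

```haskell
-- Sketch of the UTF-8 round-trip these JSON instances rely on.
-- Uses only the bytestring/text packages; names are illustrative.
import qualified Data.ByteString as BS
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8With, encodeUtf8)
import Data.Text.Encoding.Error (lenientDecode)

main :: IO ()
main = do
  -- Valid UTF-8 round-trips exactly:
  let t = T.pack "héllo"
  print (decodeUtf8With lenientDecode (encodeUtf8 t) == t)
  -- Invalid bytes are replaced with U+FFFD instead of failing:
  print (T.all (== '\xFFFD') (decodeUtf8With lenientDecode (BS.pack [0xFF, 0xFE])))
```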


@@ -354,7 +354,7 @@ runSetupActions logger acts = go acts []
data Fixture a = Fixture
{ -- | A name describing the given context.
--
-- e.g. @Postgres@ or @MySQL@
-- e.g. @Postgres@ or @BigQuery@
name :: FixtureName,
-- | Setup actions associated with creating a local testEnvironment for this
-- 'Fixture'; for example, starting remote servers.


@@ -3,7 +3,7 @@
{-# LANGUAGE CPP #-}
-- NOTE: This module previously used Template Haskell to generate its instances,
-- but additional restrictions on Template Haskell splices introduced in GHC 9.0 impose an ordering
-- on the generated instances that is difficult to satisfy (see ../MySQL/Types/Instances.hs).
-- on the generated instances that is difficult to satisfy
-- To avoid these difficulties, we now use CPP.
-- | MSSQL Types Instances


@@ -1,167 +0,0 @@
module Hasura.Backends.MySQL.Connection
( runJSONPathQuery,
resolveSourceConfig,
resolveDatabaseMetadata,
postDropSourceHook,
fetchAllRows,
runQueryYieldingRows,
withMySQLPool,
parseTextRows,
)
where
import Data.Aeson hiding (Result)
import Data.Aeson qualified as J
import Data.Aeson.Key qualified as K
import Data.Aeson.KeyMap qualified as KM
import Data.Aeson.Text (encodeToTextBuilder)
import Data.ByteString (ByteString)
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
import Data.Pool
import Data.Scientific (fromFloatDigits)
import Data.Text qualified as T
import Data.Text.Encoding (decodeUtf8)
import Data.Text.Lazy (toStrict)
import Data.Text.Lazy.Builder (toLazyText)
import Data.Vector (Vector)
import Data.Vector qualified as V
import Database.MySQL.Base
import Database.MySQL.Base.Types (Field (..))
import Database.MySQL.Simple.Result qualified as MySQL
import Hasura.Backends.MySQL.DataLoader.Plan qualified as DataLoaderPlan
import Hasura.Backends.MySQL.Meta (getMetadata)
import Hasura.Backends.MySQL.ToQuery (Query (..))
import Hasura.Backends.MySQL.Types
import Hasura.Base.Error
import Hasura.Prelude
import Hasura.RQL.Types.Backend (BackendConfig)
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Source
import Hasura.RQL.Types.Table (TableEventTriggers)
resolveSourceConfig :: (MonadIO m) => SourceName -> ConnSourceConfig -> BackendSourceKind 'MySQL -> BackendConfig 'MySQL -> environment -> manager -> m (Either QErr SourceConfig)
resolveSourceConfig _name csc@ConnSourceConfig {_cscPoolSettings = ConnPoolSettings {..}, ..} _backendKind _backendConfig _env _manager = do
let connectInfo =
defaultConnectInfo
{ connectHost = T.unpack _cscHost,
connectPort = _cscPort,
connectUser = T.unpack _cscUser,
connectPassword = T.unpack _cscPassword,
connectDatabase = T.unpack _cscDatabase
}
runExceptT $
SourceConfig csc
<$> liftIO
( createPool
(connect connectInfo)
close
1
(fromIntegral _cscIdleTimeout)
(fromIntegral _cscMaxConnections)
)
resolveDatabaseMetadata :: (MonadIO m) => SourceConfig -> m (Either QErr (DBObjectsIntrospection 'MySQL))
resolveDatabaseMetadata SourceConfig {..} = runExceptT do
metadata <- liftIO $ withResource scConnectionPool (getMetadata scConfig)
pure $ DBObjectsIntrospection metadata mempty mempty
postDropSourceHook ::
(MonadIO m) =>
SourceConfig ->
TableEventTriggers 'MySQL ->
m ()
postDropSourceHook _ _ =
-- As of now, we do not add any Hasura related stuff to source DB hence
-- no need to clean things up.
pure ()
parseFieldResult :: Field -> Maybe ByteString -> Value
parseFieldResult f@Field {..} mBs =
case fieldType of
Long ->
let fvalue :: Double = MySQL.convert f mBs
in Number $ fromFloatDigits fvalue
VarString ->
let fvalue :: Text = MySQL.convert f mBs
in J.String fvalue
Blob ->
let fvalue :: Text = MySQL.convert f mBs
in J.String fvalue
DateTime -> maybe J.Null (J.String . decodeUtf8) mBs
_ -> error $ "parseFieldResult: not implemented yet " <> show f <> " " <> show mBs
-- TODO: handle remaining cases
fieldsToAeson :: [Field] -> [[Maybe ByteString]] -> [Value]
fieldsToAeson column rows =
[ Object $
KM.fromList $
[ (K.fromText (decodeUtf8 (fieldName c))) .= (parseFieldResult c r)
| (c, r) <- (zip column row :: [(Field, Maybe ByteString)])
]
| row <- (rows :: [[Maybe ByteString]])
]
runJSONPathQuery ::
(MonadError QErr m, MonadIO m) =>
(Pool Connection) ->
Query ->
m Text
runJSONPathQuery pool (Query querySql) = do
result <- liftIO $
withResource pool $ \conn -> do
query conn querySql
result <- storeResult conn
fields <- fetchFields result
rows <- fetchAllRows result
pure $ fieldsToAeson fields rows
pure $ toStrict $ toLazyText $ encodeToTextBuilder $ toJSON result
-- | Used by the dataloader to produce rows of records. Those rows of
-- records are then manipulated by the dataloader to do Haskell-side
-- joins. Is a Vector of HashMaps the most efficient choice? A
-- pandas-style data frame could also be more efficient,
-- depending on usage. However, this is a legible approach; efficiency
-- improvements can be added later.
parseAndCollectRows ::
[Field] ->
[[Maybe ByteString]] ->
Vector (InsOrdHashMap DataLoaderPlan.FieldName J.Value)
parseAndCollectRows columns rows =
V.fromList
[ InsOrdHashMap.fromList
[ (DataLoaderPlan.FieldName . decodeUtf8 . fieldName $ column, parseFieldResult column value)
| (column, value) <- zip columns row :: [(Field, Maybe ByteString)]
]
| row <- rows :: [[Maybe ByteString]]
]
-- | Run a query immediately and parse up the results into a vector.
runQueryYieldingRows ::
(MonadIO m) =>
Pool Connection ->
Query ->
m (Vector (InsOrdHashMap DataLoaderPlan.FieldName J.Value))
runQueryYieldingRows pool (Query querySql) = do
liftIO $
withResource pool $ \conn -> do
query conn querySql
result <- storeResult conn
fields <- fetchFields result
rows <- fetchAllRows result
pure (parseAndCollectRows fields rows)
fetchAllRows :: Result -> IO [[Maybe ByteString]]
fetchAllRows r = reverse <$> go [] r
where
go acc res =
fetchRow res >>= \case
[] -> pure acc
r' -> go (r' : acc) res
parseTextRows :: [Field] -> [[Maybe ByteString]] -> [[Text]]
parseTextRows columns rows = zipWith (\column row -> map (MySQL.convert column) row) columns rows
withMySQLPool :: (MonadIO m) => Pool Connection -> (Connection -> IO a) -> m a
withMySQLPool pool = liftIO . withResource pool
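`fieldsToAeson` and `parseAndCollectRows` share one shape: zip each row's raw values with the column list, keying every value by its column's field name. A base-only sketch of that pattern (all names hypothetical, with plain `String`s standing in for `Field` metadata and raw bytes):

```haskell
-- Illustrative, base-only model of the fieldsToAeson row-shaping pattern.
-- FieldName/RawValue are stand-ins, not the real Hasura/MySQL types.
type FieldName = String
type RawValue = Maybe String -- Nothing models SQL NULL

shapeRows :: [FieldName] -> [[RawValue]] -> [[(FieldName, String)]]
shapeRows columns rows =
  [ [ (name, maybe "null" id value)
    | (name, value) <- zip columns row
    ]
  | row <- rows
  ]

main :: IO ()
main =
  mapM_ print (shapeRows ["id", "name"] [[Just "1", Just "ada"], [Just "2", Nothing]])
```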


@@ -1,398 +0,0 @@
{-# LANGUAGE UndecidableInstances #-}
-- |
--
-- Execute the plan given from .Plan.
module Hasura.Backends.MySQL.DataLoader.Execute
( OutputValue (..),
RecordSet (..),
ExecuteProblem (..),
execute,
runExecute,
-- for testing
joinObjectRows,
leftObjectJoin,
)
where
import Control.Monad.IO.Class
import Data.Aeson hiding (Value)
import Data.Aeson qualified as J
import Data.Bifunctor
import Data.Foldable
import Data.Graph
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
import Data.IORef
import Data.Vector (Vector)
import Data.Vector qualified as V
import GHC.TypeLits qualified
import Hasura.Backends.MySQL.Connection (runQueryYieldingRows)
import Hasura.Backends.MySQL.DataLoader.Plan
( Action (..),
FieldName (..),
HeadAndTail (..),
Join
( joinFieldName,
joinRhsOffset,
joinRhsTop,
joinType,
leftRecordSet,
rightRecordSet
),
PlannedAction (..),
Ref,
selectQuery,
toFieldName,
)
import Hasura.Backends.MySQL.DataLoader.Plan qualified as DataLoaderPlan
import Hasura.Backends.MySQL.DataLoader.Plan qualified as Plan
import Hasura.Backends.MySQL.ToQuery (fromSelect, toQueryFlat)
import Hasura.Backends.MySQL.Types hiding
( FieldName,
ScalarValue,
selectWhere,
)
-- import Hasura.Backends.MySQL.Types qualified as MySQL
import Hasura.GraphQL.Parser ()
-- Brings an instance for Hashable (Vector a)...
import Hasura.Prelude hiding
( concatMap,
elem,
head,
map,
mapMaybe,
tail,
toList,
)
-- | A set of records produced by the database. These are joined
-- together. There are all sorts of optimizations possible here, from
-- using a matrix/flat vector, unboxed sums for Value, etc. Presently
-- we choose a naive implementation in the interest of getting other
-- work done.
data RecordSet = RecordSet
{ origin :: Maybe PlannedAction,
rows :: Vector (InsOrdHashMap FieldName OutputValue),
wantedFields :: Maybe [Text]
}
deriving (Show)
instance GHC.TypeLits.TypeError ('GHC.TypeLits.Text "Aeson loses key order, so you can't use this instance.") => ToJSON RecordSet where
toJSON RecordSet {} = error "RecordSet.toJSON: do not use."
-- | The read-only info. used by the Execute monad. Later, this IORef
-- may become either atomically modified or in an STM or MVar so that
-- jobs can be executed in parallel.
data ExecuteReader = ExecuteReader
{ recordSets :: IORef (InsOrdHashMap Ref RecordSet),
credentials :: SourceConfig
}
-- | Any problem encountered while executing the plan.
data ExecuteProblem
= GetJobDecodeProblem String
| CreateQueryJobDecodeProblem String
| JoinProblem ExecuteProblem
| UnsupportedJoinBug JoinType
| MissingRecordSetBug Ref
| BrokenJoinInvariant [DataLoaderPlan.FieldName]
deriving (Show)
-- | Execute monad; as queries are performed, the record sets are
-- stored in the map.
newtype Execute a = Execute
{unExecute :: ReaderT ExecuteReader (ExceptT ExecuteProblem IO) a}
deriving
( Functor,
Applicative,
Monad,
MonadReader ExecuteReader,
MonadIO,
MonadError ExecuteProblem
)
-- | A value produced by this execute module in a record set.
data OutputValue
= ArrayOutputValue (Vector OutputValue)
| RecordOutputValue (InsOrdHashMap DataLoaderPlan.FieldName OutputValue)
| ScalarOutputValue J.Value -- TODO: switch to 'MySQL.Scalar...'?
| NullOutputValue
deriving (Show, Eq, Generic)
instance Hashable OutputValue
--------------------------------------------------------------------------------
-- Main entry points
-- | Using the config, run the execute action. Finally, resolve the
-- head-and-tail to a record set.
runExecute ::
MonadIO m =>
SourceConfig ->
HeadAndTail ->
Execute a ->
m (Either ExecuteProblem RecordSet)
runExecute credentials headAndTail action = do
recordSets <- liftIO (newIORef mempty)
liftIO $
runExceptT $
runReaderT
(unExecute (action >> getFinalRecordSet headAndTail))
(ExecuteReader {credentials, recordSets})
-- | Execute the forest of actions.
execute :: Forest PlannedAction -> Execute ()
execute = traverse_ (traverse_ executePlannedAction)
-- | Execute an action, then store its result in the ref assigned to it.
executePlannedAction :: PlannedAction -> Execute ()
executePlannedAction PlannedAction {ref, action} =
fetchRecordSetForAction action >>= saveRecordSet ref
-- | Fetch the record set for the given action.
fetchRecordSetForAction :: Action -> Execute RecordSet
fetchRecordSetForAction =
\case
SelectAction select -> do
recordSet <- do
SourceConfig {scConnectionPool} <- asks credentials
result <-
liftIO $
runExceptT $
runQueryYieldingRows
scConnectionPool
(toQueryFlat (fromSelect (selectQuery select)))
case result of
Left problem -> throwError (JoinProblem problem)
Right rows -> pure (makeRecordSet rows)
-- Update the wanted fields from the original select. This lets
-- the executor know which fields to include after performing a
-- join.
pure recordSet {wantedFields = Plan.selectWantedFields select}
JoinAction Plan.Join {joinType = joinType', joinFieldName = fieldName, ..} -> do
left <- getRecordSet leftRecordSet
right <- getRecordSet rightRecordSet
case joinType' of
ArrayJoin fields ->
leftArrayJoin
wantedFields
fieldName
(toFieldNames fields)
joinRhsTop
joinRhsOffset
left
right
`onLeft` (throwError . JoinProblem)
ObjectJoin fields ->
leftObjectJoin
wantedFields
fieldName
(toFieldNames fields)
left
right
`onLeft` (throwError . JoinProblem)
_ -> throwError (UnsupportedJoinBug joinType')
where
toFieldNames = fmap (bimap toFieldName toFieldName)
-- | Make a record set from a flat record from the DB.
makeRecordSet :: Vector (InsOrdHashMap FieldName J.Value) -> RecordSet
makeRecordSet rows =
RecordSet
{ origin = Nothing, -- No information for this yet, but will follow
-- up with a change for this later.
rows = fmap (fmap ScalarOutputValue) rows,
wantedFields = Nothing
}
saveRecordSet :: Ref -> RecordSet -> Execute ()
saveRecordSet ref recordSet = do
recordSetsRef <- asks recordSets
liftIO (modifyIORef' recordSetsRef (InsOrdHashMap.insert ref recordSet))
getRecordSet :: Ref -> Execute RecordSet
getRecordSet ref = do
recordSetsRef <- asks recordSets
hash <- liftIO (readIORef recordSetsRef)
InsOrdHashMap.lookup ref hash `onNothing` throwError (MissingRecordSetBug ref)
-- | See documentation for 'HeadAndTail'.
getFinalRecordSet :: HeadAndTail -> Execute RecordSet
getFinalRecordSet HeadAndTail {..} = do
headSet <- getRecordSet head
tailSet <-
if tail /= head
then getRecordSet tail
else pure headSet
pure
tailSet
{ rows =
fmap
( InsOrdHashMap.filterWithKey
( \(FieldName k) _ ->
all (elem k) (wantedFields headSet)
)
)
(rows tailSet)
}
{- WIP, it seems:
-- | Make an lhs_fk IN (rhs_fk1, rhs_fk2, ..) expression list.
makeRelationshipIn :: DataLoaderPlan.Relationship -> Execute [Expression]
makeRelationshipIn
DataLoaderPlan.Relationship
{ leftRecordSet,
joinType = _,
rightTable = _rightTable
} = do
RecordSet {rows = _rows} <- getRecordSet leftRecordSet
-- TODO: A follow-up PR will add IN(..) and will join on the join
-- fields for the left/right tables. It needs support from Types.hs.
pure []
where
_lookupField' k row =
case InsOrdHashMap.lookup k row of
Nothing -> Nothing
Just x -> Just x
-- | Will be used by makeRelationshipIn for forming lhs_fk IN (rhs_fk1, rhs_fk2, ..)
planFieldNameToQueryFieldName :: EntityAlias -> FieldName -> MySQL.FieldName
planFieldNameToQueryFieldName (EntityAlias fieldNameEntity) (FieldName fieldName) =
MySQL.FieldName {fNameEntity = fieldNameEntity, fName = fieldName}
-}
-- | Inefficient but clean left object join.
leftObjectJoin ::
Maybe [Text] ->
Text ->
[(DataLoaderPlan.FieldName, DataLoaderPlan.FieldName)] ->
RecordSet ->
RecordSet ->
Either ExecuteProblem RecordSet
leftObjectJoin wantedFields joinAlias joinFields left right = do
rows' <- fmap V.fromList . traverse makeRows . toList $ rows left
pure
RecordSet
{ origin = Nothing,
wantedFields = Nothing,
rows = rows'
}
where
makeRows :: InsOrdHashMap FieldName OutputValue -> Either ExecuteProblem (InsOrdHashMap FieldName OutputValue)
makeRows leftRow =
let rightRows =
V.fromList
[ rightRow
| not (null joinFields),
rightRow <- toList (rows right),
all
( \(rightField, leftField) ->
Just True
== ( do
leftValue <- InsOrdHashMap.lookup leftField leftRow
rightValue <- InsOrdHashMap.lookup rightField rightRow
pure (leftValue == rightValue)
)
)
joinFields
]
in -- The line below will return Left if rightRows has more than one element.
-- Consider moving the check here if it makes sense in the future.
joinObjectRows wantedFields joinAlias leftRow rightRows
-- | A naive, exponential reference implementation of a left join. It
-- serves as a trivial sample implementation for correctness checking
-- of more efficient ones.
leftArrayJoin ::
Maybe [Text] ->
Text ->
[(DataLoaderPlan.FieldName, DataLoaderPlan.FieldName)] ->
Top ->
Maybe Int ->
RecordSet ->
RecordSet ->
Either ExecuteProblem RecordSet
leftArrayJoin wantedFields joinAlias joinFields rhsTop rhsOffset left right =
pure
RecordSet
{ origin = Nothing,
wantedFields = Nothing,
rows =
V.fromList
[ joinArrayRows wantedFields joinAlias leftRow rightRows
| leftRow <- toList (rows left),
let rightRows =
V.fromList
( limit
( offset
[ rightRow
| not (null joinFields),
rightRow <- toList (rows right),
all
( \(rightField, leftField) ->
Just True
== ( do
leftValue <- InsOrdHashMap.lookup leftField leftRow
rightValue <- InsOrdHashMap.lookup rightField rightRow
pure (leftValue == rightValue)
)
)
joinFields
]
)
)
]
}
where
offset = maybe id drop rhsOffset
limit =
case rhsTop of
NoTop -> id
Top n -> take n
-- | Join a row with another as an array join.
joinArrayRows ::
Maybe [Text] ->
Text ->
InsOrdHashMap DataLoaderPlan.FieldName OutputValue ->
Vector (InsOrdHashMap DataLoaderPlan.FieldName OutputValue) ->
InsOrdHashMap DataLoaderPlan.FieldName OutputValue
joinArrayRows wantedFields fieldName leftRow rightRow =
InsOrdHashMap.insert
(DataLoaderPlan.FieldName fieldName)
( ArrayOutputValue
( fmap
( RecordOutputValue
. InsOrdHashMap.filterWithKey
( \(DataLoaderPlan.FieldName k) _ ->
all (elem k) wantedFields
)
)
rightRow
)
)
leftRow
-- | Join a row with another as an object join.
--
-- If rightRow is not a single row, we throw 'BrokenJoinInvariant'.
joinObjectRows ::
Maybe [Text] ->
Text ->
InsOrdHashMap DataLoaderPlan.FieldName OutputValue ->
Vector (InsOrdHashMap DataLoaderPlan.FieldName OutputValue) ->
Either ExecuteProblem (InsOrdHashMap DataLoaderPlan.FieldName OutputValue)
joinObjectRows wantedFields fieldName leftRow rightRows
| V.length rightRows /= 1 = Left . BrokenJoinInvariant . foldMap InsOrdHashMap.keys $ rightRows
| otherwise =
let row = V.head rightRows
in pure $
InsOrdHashMap.insert
(DataLoaderPlan.FieldName fieldName)
( RecordOutputValue
( InsOrdHashMap.filterWithKey
(\(DataLoaderPlan.FieldName k) _ -> all (elem k) wantedFields)
row
)
)
leftRow
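The semantics of `leftArrayJoin` and `joinObjectRows` above can be summarised with a tiny list-based model: for each left row, gather the right rows that agree on the `(rightField, leftField)` pairs; an array join keeps every match, while an object join demands exactly one. A hypothetical, base-only sketch:

```haskell
-- Base-only model of the join semantics above; all names are
-- illustrative, not the real Hasura types.
type Row = [(String, String)]

-- Right rows matching a left row on the given (rightField, leftField) pairs.
matchesOn :: [(String, String)] -> Row -> [Row] -> [Row]
matchesOn joinFields leftRow rightRows =
  [ r
  | not (null joinFields)
  , r <- rightRows
  , all (\(rf, lf) -> lookup lf leftRow == lookup rf r) joinFields
  ]

-- Array join: embed every match under the join field.
arrayJoin :: [(String, String)] -> [Row] -> [Row] -> [(Row, [Row])]
arrayJoin jf ls rs = [(l, matchesOn jf l rs) | l <- ls]

-- Object join: exactly one match per left row, else an error value.
objectJoin :: [(String, String)] -> [Row] -> [Row] -> Either String [(Row, Row)]
objectJoin jf ls rs = traverse pair ls
  where
    pair l = case matchesOn jf l rs of
      [r] -> Right (l, r)
      rs' -> Left ("expected exactly one right row, got " ++ show (length rs'))

main :: IO ()
main = do
  let artists = [[("id", "1"), ("name", "ada")]]
      albums = [[("artist_id", "1"), ("title", "x")], [("artist_id", "1"), ("title", "y")]]
  print (arrayJoin [("artist_id", "id")] artists albums) -- one left row, two matches
  print (objectJoin [("artist_id", "id")] artists albums) -- Left: invariant broken
```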


@@ -1,315 +0,0 @@
{-# LANGUAGE DuplicateRecordFields #-}
-- | Make a plan for the data loader to execute (.Execute).
--
-- It will produce a graph of actions, to be executed by .Execute.
module Hasura.Backends.MySQL.DataLoader.Plan
( Ref,
PlannedAction (..),
Action (..),
Select (..),
Join (..),
Relationship (..),
FieldName (..),
HeadAndTail (..),
toFieldName,
runPlan,
planSelectHeadAndTail,
actionsForest,
selectQuery,
)
where
import Data.Aeson
import Data.Bifunctor
import Data.Graph
import Data.HashSet.InsOrd qualified as OSet
import Data.Sequence qualified as Seq
import Data.String
import Hasura.Backends.MySQL.Types qualified as MySQL
import Hasura.Prelude hiding (head, second, tail, tell)
--------------------------------------------------------------------------------
-- Types
-- | A reference to a result of loading a recordset from the database.
data Ref = Ref
{ -- | This index will be generated by the planner.
idx :: Int,
-- | A display name. The idx gives us uniqueness.
text :: Text
}
deriving (Show, Eq, Generic, Ord)
instance Hashable Ref
-- | An almost-identical version of Select from Types.Internal, except
-- with some fields used for planning and executing.
data Select = Select
{ selectAggUnwrap :: Maybe Text,
selectFrom :: MySQL.From,
selectGroupBy :: [MySQL.FieldName],
selectHaskellJoins :: [MySQL.Join],
selectOrderBy :: Maybe (NonEmpty MySQL.OrderBy),
selectProjections :: [MySQL.Projection],
selectRelationship :: Maybe Relationship,
selectWhere :: MySQL.Where,
selectWantedFields :: Maybe [Text],
selectSqlOffset :: Maybe Int,
selectSqlTop :: MySQL.Top
}
deriving (Show)
-- | A join action.
data Join = Join
{ -- | Join this side...
leftRecordSet :: Ref,
-- | with this side.
rightRecordSet :: Ref,
-- | Join only the top N results. It's important that we do this
-- IN HASKELL, therefore this is not part of the generated SQL.
joinRhsTop :: MySQL.Top,
-- | Offset applied to the right-hand-side table.
joinRhsOffset :: Maybe Int,
-- | Type of relational join to do.
joinType :: MySQL.JoinType,
-- | Field name to return the join result as; e.g. "albums" for an
-- artist with an array relation of albums.
joinFieldName :: Text,
-- | The SQL queries may achieve the data using joining fields,
-- but those fields aren't supposed to be returned back to the
-- user. To avoid that, we explicitly specify which fields are
-- wanted from this join. E.g. "title" and "year", but not
-- artist_id which was used to Haskell-join the row with an
-- album_artist_id, or whatever.
wantedFields :: Maybe [Text]
}
deriving (Show)
-- | An action that the executor will perform. Either pull data from
-- the database directly via a select, or join two other actions'
-- record sets together.
data Action
= SelectAction Select
| JoinAction Join
deriving (Show)
-- | An action planned, with a unique reference. I.e. the @action@
-- performed yields a result stored at reference @ref@.
data PlannedAction = PlannedAction
{ ref :: Ref,
action :: Action
}
deriving (Show)
-- | A relationship lets the executor insert on-the-fly WHERE
-- fkey1=fkey2 for relationships. These can only be inserted
-- on-the-fly and aren't known at the time of planning, because the
-- keys come from the left-hand-side table for a join.
data Relationship = Relationship
{ leftRecordSet :: Ref,
rightTable :: MySQL.EntityAlias,
joinType :: MySQL.JoinType
}
deriving (Show)
-- | Just a wrapper to clarify some types. It's different from the
-- MySQL.FieldName because it doesn't care about schemas: schemas
-- aren't returned in recordsets from the database.
newtype FieldName
= FieldName Text
deriving (Show, Ord, Eq, Hashable, FromJSON, ToJSONKey, IsString)
-- | The reason for this is subtle. Read this documentation. For each
-- join on a select (see above, there is a list), we split that out
-- into three jobs:
--
-- 1. One job for the left hand side (i.e. the select).
-- 2. One job for the right hand side (i.e. the join).
-- 3. One job to join them (And in the darkness bind them...)
--
-- This is performed as a fold, like: @foldM planJoin head joins@. A
-- nice linked-list or tree-like structure arises. The planner code
-- produces a graph out of this; so it's possible that some
-- parallelism can be achieved by running multiple jobs at once.
--
-- The "head" is the first, original select. The "tail" is the
-- (indirectly) linked list of joins. That list may also be empty. In
-- that case, the tail is simply the same as the head.
--
-- If the tail is different to the head, then we choose the tail, as
-- it represents the joined up version of both. If they're the same,
-- we take whichever.
data HeadAndTail = HeadAndTail
{ head :: Ref,
tail :: Ref
}
-- | We're simply accumulating a set of actions with this. The counter
-- lets us generate unique refs.
data PlanState = PlanState
{ actions :: Seq PlannedAction,
counter :: Int
}
-- | Simple monad to collect actions.
newtype Plan a = Plan
{ unPlan :: State PlanState a
}
deriving (Functor, Applicative, Monad, MonadState PlanState)
--------------------------------------------------------------------------------
-- Conversions
-- | Note that we're intentionally discarding the table qualification.
toFieldName :: MySQL.FieldName -> FieldName
toFieldName (MySQL.FieldName {fName = t}) = FieldName t
joinAliasName :: MySQL.EntityAlias -> Text
joinAliasName (MySQL.EntityAlias {entityAliasText}) = entityAliasText
-- | Used for display purposes, not semantic content.
selectFromName :: MySQL.From -> Text
selectFromName =
\case
MySQL.FromQualifiedTable (MySQL.Aliased {aliasedThing = MySQL.TableName {name}}) ->
name
MySQL.FromSelect (MySQL.Aliased {aliasedThing = MySQL.Select {selectFrom}}) ->
selectFromName selectFrom
--------------------------------------------------------------------------------
-- Run planner
runPlan :: Plan r -> (r, [PlannedAction])
runPlan =
second (toList . actions)
. flip runState (PlanState {actions = mempty, counter = 0})
. unPlan
--------------------------------------------------------------------------------
-- Planners
-- | See the documentation for 'HeadAndTail'.
planSelectHeadAndTail :: Maybe Relationship -> Maybe Text -> MySQL.Select -> Plan HeadAndTail
planSelectHeadAndTail relationship joinExtractPath select0 = do
ref <- generate (selectFromName (MySQL.selectFrom select0))
let select = fromSelect relationship joinExtractPath select0
action = SelectAction select
tell PlannedAction {ref, action}
joinsFinalRef <- foldM planJoin ref (selectHaskellJoins select)
pure
( let head = ref
tail = case selectHaskellJoins select of
[] -> ref
_ -> joinsFinalRef
in HeadAndTail {head, tail}
)
-- | Given a left-hand-side table and a join spec, produce a single
-- reference that refers to the composition of the two.
planJoin :: Ref -> MySQL.Join -> Plan Ref
planJoin leftRecordSet join' = do
ref <- generate (joinAliasName (MySQL.joinRightTable join'))
rightRecordSet <-
fmap
(\HeadAndTail {..} -> tail)
( planSelectHeadAndTail
( Just
( Relationship
{ leftRecordSet,
joinType = MySQL.joinType join',
rightTable = MySQL.joinRightTable join'
}
)
)
Nothing
(MySQL.joinSelect join')
)
let action =
JoinAction
Join
{ leftRecordSet,
rightRecordSet,
wantedFields = MySQL.selectFinalWantedFields (MySQL.joinSelect join'),
joinRhsTop = MySQL.joinTop join',
joinRhsOffset = MySQL.joinOffset join',
joinFieldName = MySQL.joinFieldName join',
joinType = MySQL.joinType join',
..
}
tell PlannedAction {ref, action}
pure ref
--------------------------------------------------------------------------------
-- Monad helpers
-- | Write the planned action to the state, like a writer's @tell@.
tell :: PlannedAction -> Plan ()
tell action = modify' (\s -> s {actions = actions s Seq.:|> action})
-- | Generate a unique reference with a label for debugging.
generate :: Text -> Plan Ref
generate text = do
idx <- gets counter
modify' (\s -> s {counter = counter s + 1})
pure (Ref {idx, text})
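-- In isolation, the 'tell'/'generate' pair is just a 'State' monad
-- that hands out indices and accumulates a log; a minimal sketch with
-- hypothetical names:
--
-- > data MiniState = MiniState {nextIdx :: Int, entries :: Seq Text}
-- >
-- > generate' :: State MiniState Int
-- > generate' = do
-- >   idx <- gets nextIdx
-- >   modify' (\s -> s {nextIdx = nextIdx s + 1})
-- >   pure idx
-- >
-- > tell' :: Text -> State MiniState ()
-- > tell' e = modify' (\s -> s {entries = entries s Seq.:|> e})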
--------------------------------------------------------------------------------
-- Graphing the plan to a forest
-- | Graph the set of planned actions ready for execution in the correct order.
actionsForest :: (Graph -> Graph) -> [PlannedAction] -> Forest PlannedAction
actionsForest transform actions =
let (graph, vertex2Node, _key2Vertex) =
graphFromEdges
( map
( \PlannedAction {ref, action} ->
( action,
ref,
map
(\PlannedAction {ref = r} -> r)
(filter (elem ref . plannedActionRefs) actions)
)
)
actions
)
in fmap
( fmap
((\(action, ref, _refs) -> PlannedAction {ref, action}) . vertex2Node)
)
(dff (transform graph))
where
plannedActionRefs PlannedAction {action} =
case action of
SelectAction Select {selectRelationship} ->
case selectRelationship of
Just Relationship {leftRecordSet} -> [leftRecordSet]
Nothing -> mempty
JoinAction Join {leftRecordSet, rightRecordSet} ->
[leftRecordSet, rightRecordSet]
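-- The graphing above is plain "Data.Graph": each action lists the
-- keys of the actions that depend on it, and 'dff' yields a forest in
-- which producers precede their consumers. A standalone sketch with
-- strings standing in for actions:
--
-- > demoForest :: Forest String
-- > demoForest =
-- >   let (graph, vertex2Node, _) =
-- >         graphFromEdges
-- >           [ ("select lhs", 1 :: Int, [3]), -- the join (key 3) consumes this
-- >             ("select rhs", 2, [3]),
-- >             ("join", 3, [])
-- >           ]
-- >    in fmap (fmap ((\(a, _, _) -> a) . vertex2Node)) (dff graph)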
--------------------------------------------------------------------------------
-- Build a query
-- | Used by the executor to produce a plain old select that can be
-- sent to the MySQL server.
selectQuery :: Select -> MySQL.Select
selectQuery Select {..} =
MySQL.Select
{ selectJoins = selectHaskellJoins,
selectProjections = OSet.fromList selectProjections,
selectFinalWantedFields = selectWantedFields,
..
}
-- | From a plain select, and possibly a parent/left-hand-side
-- relationship, produce a select that is useful for execution.
fromSelect :: Maybe Relationship -> Maybe Text -> MySQL.Select -> Select
fromSelect selectRelationship selectAggUnwrap select@MySQL.Select {..} =
Select
{ selectHaskellJoins = selectJoins,
selectWantedFields = MySQL.selectFinalWantedFields select,
selectGroupBy = [],
selectProjections = toList selectProjections,
..
}
@ -1,857 +0,0 @@
-- | Translate from the DML to the MySQL dialect.
module Hasura.Backends.MySQL.FromIr
( fromSelectRows,
mkSQLSelect,
fromRootField,
FromIr,
Error (..),
runFromIr,
)
where
import Control.Monad.Validate
import Data.HashMap.Strict qualified as HashMap
import Data.HashSet.InsOrd qualified as OSet
import Data.Map.Strict (Map)
import Data.Map.Strict qualified as M
import Data.Proxy
import Data.Text qualified as T
import Database.MySQL.Base.Types qualified as MySQL
import Hasura.Backends.MySQL.Instances.Types ()
import Hasura.Backends.MySQL.Types
import Hasura.Prelude hiding (GT)
import Hasura.RQL.IR qualified as IR
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column qualified as IR
import Hasura.RQL.Types.Common qualified as IR
import Hasura.RQL.Types.Relationships.Local qualified as IR
data FieldSource
= ExpressionFieldSource (Aliased Expression)
| JoinFieldSource (Aliased Join)
| AggregateFieldSource [Aliased Aggregate]
deriving (Eq, Show)
-- | Most of these errors should be checked for legitimacy.
data Error
= UnsupportedOpExpG (IR.OpExpG 'MySQL Expression)
| IdentifierNotSupported
| FunctionNotSupported
| NativeQueryNotSupported
| NodesUnsupportedForNow
| ConnectionsNotSupported
deriving (Show, Eq)
-- | The base monad used throughout this module for all conversion
-- functions.
--
-- It's a Validate, so it'll keep going when it encounters errors,
-- accumulating as many as possible.
--
-- It also contains a mapping from entity prefixes to counters. So if
-- my prefix is "table" then there'll be a counter that lets me
-- generate table1, table2, etc. Same for any other prefix needed
-- (e.g. names for joins).
--
-- A ReaderT is used around this in most of the module too, for
-- setting the current entity that a given field name refers to. See
-- @fromColumn@.
newtype FromIr a = FromIr
{ unFromIr :: StateT (Map Text Int) (Validate (NonEmpty Error)) a
}
deriving (Functor, Applicative, Monad, MonadValidate (NonEmpty Error))
--------------------------------------------------------------------------------
-- Runners
runFromIr :: FromIr a -> Validate (NonEmpty Error) a
runFromIr fromIr = evalStateT (unFromIr fromIr) mempty
data NameTemplate
= ArrayRelationTemplate Text
| ArrayAggregateTemplate Text
| ObjectRelationTemplate Text
| TableTemplate Text
| ForOrderAlias Text
| IndexTemplate
generateEntityAlias :: NameTemplate -> FromIr Text
generateEntityAlias template = do
FromIr (modify' (M.insertWith (+) prefix start))
i <- FromIr get
pure (prefix <> tshow (fromMaybe start (M.lookup prefix i)))
where
start = 1
prefix = T.take 20 rendered
rendered =
case template of
ArrayRelationTemplate sample -> "ar_" <> sample
ArrayAggregateTemplate sample -> "aa_" <> sample
ObjectRelationTemplate sample -> "or_" <> sample
TableTemplate sample -> "t_" <> sample
ForOrderAlias sample -> "order_" <> sample
IndexTemplate -> "idx_"
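-- The counter map behaves like this sketch: each prefix keeps its own
-- counter, so successive calls yield e.g. t_author1, t_author2, or_x1
-- (standalone illustration, not the original monad):
--
-- > nextAlias :: Text -> State (Map Text Int) Text
-- > nextAlias prefix = do
-- >   modify' (M.insertWith (+) prefix 1)
-- >   m <- get
-- >   pure (prefix <> tshow (M.findWithDefault 1 prefix m))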
-- | This is really the start where you query the base table,
-- everything else is joins attached to it.
fromQualifiedTable :: TableName -> FromIr From
fromQualifiedTable schemadTableName@(TableName {name}) = do
alias <- generateEntityAlias (TableTemplate name)
pure
( FromQualifiedTable
( Aliased
{ aliasedThing =
schemadTableName,
aliasedAlias = alias
}
)
)
fromAlias :: From -> EntityAlias
fromAlias (FromQualifiedTable Aliased {aliasedAlias}) = EntityAlias aliasedAlias
fromAlias (FromSelect Aliased {aliasedAlias}) = EntityAlias aliasedAlias
trueExpression :: Expression
trueExpression = ValueExpression (BitValue True)
existsFieldName :: Text
existsFieldName = "exists_placeholder"
fromGExists :: IR.GExists 'MySQL Expression -> ReaderT EntityAlias FromIr Select
fromGExists IR.GExists {_geTable, _geWhere} = do
selectFrom <- lift (fromQualifiedTable _geTable)
whereExpression <-
local (const (fromAlias selectFrom)) (fromGBoolExp _geWhere)
pure
Select
{ selectOrderBy = Nothing,
selectProjections =
OSet.fromList
[ ExpressionProjection
( Aliased
{ aliasedThing = trueExpression,
aliasedAlias = existsFieldName
}
)
],
selectFrom = selectFrom,
selectGroupBy = [],
selectJoins = mempty,
selectWhere = Where [whereExpression],
selectSqlTop = NoTop,
selectSqlOffset = Nothing,
selectFinalWantedFields = Nothing
}
fromGBoolExp :: IR.GBoolExp 'MySQL Expression -> ReaderT EntityAlias FromIr Expression
fromGBoolExp = do
\case
IR.BoolAnd expressions ->
fmap AndExpression (traverse fromGBoolExp expressions)
IR.BoolOr expressions ->
fmap OrExpression (traverse fromGBoolExp expressions)
IR.BoolNot expression ->
fmap NotExpression (fromGBoolExp expression)
IR.BoolExists gExists ->
fmap ExistsExpression (fromGExists gExists)
IR.BoolField expression ->
pure expression
fromAnnBoolExp ::
IR.GBoolExp 'MySQL (IR.AnnBoolExpFld 'MySQL Expression) ->
ReaderT EntityAlias FromIr Expression
fromAnnBoolExp boolExp = do
fields <- traverse fromAnnBoolExpFld boolExp
fromGBoolExp fields
-- | For boolean operators, various comparison operators need special
-- handling to ensure that the database won't outright reject the
-- comparison. (This note was carried over from the analogous SQL
-- Server code; see 'shouldCastToVarcharMax' there.)
fromColumnInfoForBoolExp :: IR.ColumnInfo 'MySQL -> ReaderT EntityAlias FromIr Expression
fromColumnInfoForBoolExp IR.ColumnInfo {ciColumn = column, ciType = _ciType} = do
fieldName <- columnNameToFieldName column <$> ask
pure (ColumnExpression fieldName)
fromAnnBoolExpFld ::
IR.AnnBoolExpFld 'MySQL Expression ->
ReaderT EntityAlias FromIr Expression
fromAnnBoolExpFld =
\case
IR.AVColumn columnInfo opExpGs -> do
expression <- fromColumnInfoForBoolExp columnInfo
expressions <- traverse (lift . fromOpExpG expression) opExpGs
pure (AndExpression expressions)
IR.AVRelationship IR.RelInfo {riMapping = mapping, riRTable = table} (IR.RelationshipFilters tablePerms annBoolExp) -> do
selectFrom <- lift (fromQualifiedTable table)
foreignKeyConditions <- fromMapping selectFrom mapping
whereExpression <-
local (const (fromAlias selectFrom)) (fromAnnBoolExp (IR.BoolAnd [tablePerms, annBoolExp]))
pure
( ExistsExpression
Select
{ selectOrderBy = Nothing,
selectProjections =
OSet.fromList
[ ExpressionProjection
( Aliased
{ aliasedThing = trueExpression,
aliasedAlias = existsFieldName
}
)
],
selectFrom = selectFrom,
selectGroupBy = [],
selectJoins = mempty,
selectWhere = Where (foreignKeyConditions <> [whereExpression]),
selectSqlTop = NoTop,
selectSqlOffset = Nothing,
selectFinalWantedFields = Nothing
}
)
-- | The context given by the reader is of the previous/parent
-- "remote" table. The WHERE that we're generating goes in the child,
-- "local" query. The @From@ passed in as argument is the local table.
--
-- We should hope to see e.g. "post.category = category.id" for a
-- local table of post and a remote table of category.
--
-- The left/right columns in @HashMap Column Column@ correspond
-- to the left/right of @select ... join ...@. Therefore left=remote,
-- right=local in this context.
fromMapping ::
From ->
HashMap Column Column ->
ReaderT EntityAlias FromIr [Expression]
fromMapping localFrom = traverse columnsToEqs . HashMap.toList
where
columnsToEqs (remoteColumn, localColumn) = do
localFieldName <- local (const (fromAlias localFrom)) (fromColumn localColumn)
remoteFieldName <- fromColumn remoteColumn
pure
( OpExpression
EQ'
(ColumnExpression localFieldName)
(ColumnExpression remoteFieldName)
)
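-- Rendered to SQL, each pair produced above becomes one equality; a
-- sketch over plain text (the aliases and columns are illustrative):
--
-- > eqForPair :: Text -> Text -> (Text, Text) -> Text
-- > eqForPair remoteAlias localAlias (remoteCol, localCol) =
-- >   localAlias <> "." <> localCol <> " = " <> remoteAlias <> "." <> remoteCol
-- >
-- > -- eqForPair "category" "post" ("id", "category")
-- > -- == "post.category = category.id"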
fromColumn :: Column -> ReaderT EntityAlias FromIr FieldName
fromColumn column = columnNameToFieldName column <$> ask
columnNameToFieldName :: Column -> EntityAlias -> FieldName
columnNameToFieldName (Column fieldName) EntityAlias {entityAliasText = fieldNameEntity} =
FieldName {fName = fieldName, fNameEntity = fieldNameEntity}
fromOpExpG :: Expression -> IR.OpExpG 'MySQL Expression -> FromIr Expression
fromOpExpG expression op =
case op of
IR.AEQ True val -> do
pure $ OpExpression EQ' expression val
_ -> refute (pure (UnsupportedOpExpG op))
data Args = Args
{ argsWhere :: Where,
argsOrderBy :: Maybe (NonEmpty OrderBy),
argsJoins :: [Join],
argsTop :: Top,
argsOffset :: Maybe Int,
argsDistinct :: Proxy (Maybe (NonEmpty FieldName)),
argsExistingJoins :: Map TableName EntityAlias
}
deriving (Show)
data UnfurledJoin = UnfurledJoin
{ unfurledJoin :: Join,
-- | Recorded if we joined onto an object relation.
unfurledObjectTableAlias :: Maybe (TableName, EntityAlias)
}
deriving (Show)
fromColumnInfo :: IR.ColumnInfo 'MySQL -> ReaderT EntityAlias FromIr FieldName
fromColumnInfo IR.ColumnInfo {ciColumn = column} =
columnNameToFieldName column <$> ask
tableNameText :: TableName -> Text
tableNameText (TableName {name}) = name
aggFieldName :: Text
aggFieldName = "agg"
-- | Unfurl the nested set of object relations (tell'd in the writer)
-- that are terminated by field name (IR.AOCColumn and
-- IR.AOCArrayAggregation).
unfurlAnnOrderByElement ::
IR.AnnotatedOrderByElement 'MySQL Expression ->
WriterT (Seq UnfurledJoin) (ReaderT EntityAlias FromIr) (FieldName, Maybe ScalarType)
unfurlAnnOrderByElement =
\case
IR.AOCColumn columnInfo -> do
fieldName <- lift (fromColumnInfo columnInfo)
pure
( fieldName,
case IR.ciType columnInfo of
IR.ColumnScalar t -> Just t
_ -> Nothing
)
IR.AOCObjectRelation IR.RelInfo {riRTable = table} annBoolExp annOrderByElementG -> do
selectFrom <- lift (lift (fromQualifiedTable table))
joinAliasEntity <-
lift (lift (generateEntityAlias (ForOrderAlias (tableNameText table))))
whereExpression <-
lift (local (const (fromAlias selectFrom)) (fromAnnBoolExp annBoolExp))
tell
( pure
UnfurledJoin
{ unfurledJoin =
Join
{ joinSelect =
Select
{ selectProjections = OSet.fromList [StarProjection],
selectSqlTop = NoTop,
selectSqlOffset = Nothing,
selectFrom = selectFrom,
selectJoins = [],
selectWhere =
Where [whereExpression],
selectOrderBy = Nothing,
selectFinalWantedFields = Nothing,
selectGroupBy = []
},
joinRightTable = fromAlias selectFrom,
joinType = OnlessJoin,
joinFieldName = name table,
joinTop = NoTop,
joinOffset = Nothing
},
unfurledObjectTableAlias = Just (table, EntityAlias joinAliasEntity)
}
)
local
(const (EntityAlias joinAliasEntity))
(unfurlAnnOrderByElement annOrderByElementG)
IR.AOCArrayAggregation IR.RelInfo {riMapping = mapping, riRTable = tableName} annBoolExp annAggregateOrderBy -> do
selectFrom <- lift (lift (fromQualifiedTable tableName))
let alias = aggFieldName
joinAliasEntity <-
lift (lift (generateEntityAlias (ForOrderAlias (tableNameText tableName))))
foreignKeyConditions <- lift (fromMapping selectFrom mapping)
whereExpression <-
lift (local (const (fromAlias selectFrom)) (fromAnnBoolExp annBoolExp))
aggregate <-
lift
( local
(const (fromAlias selectFrom))
( case annAggregateOrderBy of
IR.AAOCount -> pure (CountAggregate StarCountable)
IR.AAOOp text _resultType columnInfo -> do
fieldName <- fromColumnInfo columnInfo
pure (OpAggregate text (pure (ColumnExpression fieldName)))
)
)
tell
( pure
( UnfurledJoin
{ unfurledJoin =
Join
{ joinSelect =
Select
{ selectProjections =
OSet.fromList
[ AggregateProjection
Aliased
{ aliasedThing = aggregate,
aliasedAlias = alias
}
],
selectSqlTop = NoTop,
selectGroupBy = [],
selectFrom = selectFrom,
selectJoins = [],
selectWhere =
Where
(foreignKeyConditions <> [whereExpression]),
selectOrderBy = Nothing,
selectSqlOffset = Nothing,
selectFinalWantedFields = Nothing
},
joinFieldName = "",
joinRightTable = EntityAlias "",
joinType = OnlessJoin,
joinTop = NoTop,
joinOffset = Nothing
},
unfurledObjectTableAlias = Nothing
}
)
)
pure
( FieldName {fNameEntity = joinAliasEntity, fName = alias},
Nothing
)
-- | Produce a valid ORDER BY construct, telling (via the writer) about
-- any joins needed on the side.
fromAnnOrderByItemG ::
IR.AnnotatedOrderByItemG 'MySQL Expression ->
WriterT (Seq UnfurledJoin) (ReaderT EntityAlias FromIr) OrderBy
fromAnnOrderByItemG IR.OrderByItemG {obiType, obiColumn = obiColumn, obiNulls} = do
(orderByFieldName, orderByType) <- unfurlAnnOrderByElement obiColumn
let orderByNullsOrder = fromMaybe NullsAnyOrder obiNulls
orderByOrder = fromMaybe Asc obiType
pure OrderBy {..}
fromSelectArgsG :: IR.SelectArgsG 'MySQL Expression -> ReaderT EntityAlias FromIr Args
fromSelectArgsG selectArgsG = do
let argsOffset = fromIntegral <$> moffset
argsWhere <-
maybe (pure mempty) (fmap (Where . pure) . fromAnnBoolExp) mannBoolExp
argsTop <-
maybe (pure mempty) (pure . Top) mlimit
let argsDistinct = Proxy
(argsOrderBy, joins) <-
runWriterT (traverse fromAnnOrderByItemG (maybe [] toList orders))
-- Any object-relation joins that we generated, we record their
-- generated names into a mapping.
let argsExistingJoins =
M.fromList (mapMaybe unfurledObjectTableAlias (toList joins))
pure
Args
{ argsJoins = toList (fmap unfurledJoin joins),
argsOrderBy = nonEmpty argsOrderBy,
..
}
where
IR.SelectArgs
{ _saWhere = mannBoolExp,
_saLimit = mlimit,
_saOffset = moffset,
_saOrderBy = orders
} = selectArgsG
-- | Here is where we project a field as a column expression. If
-- number stringification is on, then we wrap it in a
-- 'ToStringExpression' so that it's casted when being projected.
fromAnnColumnField ::
IR.AnnColumnField 'MySQL Expression ->
ReaderT EntityAlias FromIr Expression
fromAnnColumnField annColumnField = do
fieldName <- fromColumn column
if typ == IR.ColumnScalar MySQL.Geometry
then pure $ MethodExpression (ColumnExpression fieldName) "STAsText" []
else pure (ColumnExpression fieldName)
where
IR.AnnColumnField
{ _acfColumn = column,
_acfType = typ,
_acfAsText = _asText :: Bool,
_acfArguments = _ :: Maybe Void
} = annColumnField
fromRelName :: IR.RelName -> FromIr Text
fromRelName relName =
pure (IR.relNameToTxt relName)
-- fromAggregateField :: IR.AggregateField 'MySQL -> ReaderT EntityAlias FromIr Aggregate
-- fromAggregateField aggregateField =
-- case aggregateField of
-- IR.AFExp text -> pure (TextAggregate text)
-- IR.AFCount countType -> CountAggregate <$> case countType of
-- StarCountable -> pure StarCountable
-- NonNullFieldCountable names -> NonNullFieldCountable <$> traverse fromColumn names
-- DistinctCountable names -> DistinctCountable <$> traverse fromColumn names
-- IR.AFOp _ -> error "fromAggregatefield: not implemented"
fromTableAggregateFieldG ::
(IR.FieldName, IR.TableAggregateFieldG 'MySQL Void Expression) -> ReaderT EntityAlias FromIr FieldSource
fromTableAggregateFieldG (IR.FieldName _name, _field) = error "fromTableAggregateFieldG: not implemented yet"
fieldSourceProjections :: FieldSource -> [Projection]
fieldSourceProjections =
\case
ExpressionFieldSource aliasedExpression ->
pure (ExpressionProjection aliasedExpression)
JoinFieldSource Aliased {aliasedThing = Join {..}} ->
map
( \(_left, right@(FieldName {fName})) ->
ExpressionProjection
Aliased
{ aliasedAlias = fName,
aliasedThing = ColumnExpression right
}
)
fields
where
fields =
case joinType of
ArrayJoin fs -> fs
ObjectJoin fs -> fs
ArrayAggregateJoin fs -> fs
OnlessJoin -> mempty
AggregateFieldSource aggregates -> fmap AggregateProjection aggregates
fieldSourceJoin :: FieldSource -> Maybe Join
fieldSourceJoin =
\case
JoinFieldSource aliasedJoin -> pure (aliasedThing aliasedJoin)
ExpressionFieldSource {} -> Nothing
AggregateFieldSource {} -> Nothing
fromSelectAggregate ::
Maybe (EntityAlias, HashMap Column Column) ->
IR.AnnSelectG 'MySQL (IR.TableAggregateFieldG 'MySQL Void) Expression ->
FromIr Select
fromSelectAggregate mparentRelationship annSelectG = do
selectFrom <-
case from of
IR.FromTable qualifiedObject -> fromQualifiedTable qualifiedObject
IR.FromIdentifier {} -> refute $ pure IdentifierNotSupported
IR.FromFunction {} -> refute $ pure FunctionNotSupported
IR.FromNativeQuery {} -> refute $ pure NativeQueryNotSupported
IR.FromStoredProcedure {} -> error "fromSelectAggregate: FromStoredProcedure"
_mforeignKeyConditions <- fmap (Where . fromMaybe []) $
for mparentRelationship $
\(entityAlias, mapping) ->
runReaderT (fromMapping selectFrom mapping) entityAlias
fieldSources <-
runReaderT (traverse fromTableAggregateFieldG fields) (fromAlias selectFrom)
filterExpression <-
runReaderT (fromAnnBoolExp permFilter) (fromAlias selectFrom)
Args
{ argsOrderBy,
argsWhere,
argsJoins,
argsTop,
argsDistinct = Proxy,
argsOffset
} <-
runReaderT (fromSelectArgsG args) (fromAlias selectFrom)
let selectProjections =
concatMap (toList . fieldSourceProjections) fieldSources
pure
Select
{ selectProjections = OSet.fromList selectProjections,
selectFrom = selectFrom,
selectJoins = argsJoins <> mapMaybe fieldSourceJoin fieldSources,
selectWhere = argsWhere <> Where [filterExpression],
selectOrderBy = argsOrderBy,
selectSqlOffset = argsOffset,
selectSqlTop = permissionBasedTop <> argsTop,
selectFinalWantedFields = Nothing,
selectGroupBy = []
}
where
permissionBasedTop =
maybe NoTop Top mPermLimit
IR.AnnSelectG
{ _asnFields = fields,
_asnFrom = from,
_asnPerm = perm,
_asnArgs = args,
_asnStrfyNum = _num,
_asnNamingConvention = _tCase
} = annSelectG
IR.TablePerm {_tpLimit = mPermLimit, _tpFilter = permFilter} = perm
-- _fromTableAggFieldG ::
-- (Int, (IR.FieldName, IR.TableAggregateFieldG 'MySQL Void Expression)) ->
-- Maybe (ReaderT EntityAlias FromIr (Int, (IR.FieldName, [Projection])))
-- _fromTableAggFieldG = \case
-- (index, (fieldName, IR.TAFAgg (aggregateFields :: [(IR.FieldName, IR.AggregateField 'MySQL)]))) -> Just do
-- aggregates <-
-- for aggregateFields \(fieldName', aggregateField) ->
-- fromAggregateField aggregateField <&> \aliasedThing ->
-- Aliased {aliasedAlias = IR.getFieldNameTxt fieldName', ..}
-- pure (index, (fieldName, fieldSourceProjections $ AggregateFieldSource aggregates))
-- _ -> Nothing
-- _fromTableNodesFieldG ::
-- Map TableName EntityAlias ->
-- StringifyNumbers ->
-- (Int, (IR.FieldName, IR.TableAggregateFieldG 'MySQL Void Expression)) ->
-- Maybe (ReaderT EntityAlias FromIr (Int, (IR.FieldName, [Projection])))
-- _fromTableNodesFieldG argsExistingJoins stringifyNumbers = \case
-- (index, (fieldName, IR.TAFNodes () (annFieldsG :: [(IR.FieldName, IR.AnnFieldG 'MySQL Void Expression)]))) -> Just do
-- fieldSources' <- fromAnnFieldsG argsExistingJoins stringifyNumbers `traverse` annFieldsG
-- let nodesProjections' :: [Projection] = concatMap fieldSourceProjections fieldSources'
-- pure (index, (fieldName, nodesProjections'))
-- _ -> Nothing
-- -- | Get FieldSource from a TAFExp type table aggregate field
-- _fromTableExpFieldG ::
-- (Int, (IR.FieldName, IR.TableAggregateFieldG 'MySQL Void Expression)) ->
-- Maybe (ReaderT EntityAlias FromIr (Int, [Projection]))
-- _fromTableExpFieldG = \case
-- (index, (IR.FieldName name, IR.TAFExp text)) -> Just $
-- pure
-- (index, fieldSourceProjections $
-- ExpressionFieldSource
-- Aliased
-- { aliasedThing = ValueExpression (TextValue text)
-- , aliasedAlias = name
-- })
-- _ -> Nothing
fromArrayAggregateSelectG ::
IR.AnnRelationSelectG 'MySQL (IR.AnnAggregateSelectG 'MySQL Void Expression) ->
ReaderT EntityAlias FromIr Join
fromArrayAggregateSelectG annRelationSelectG = do
fieldName <- lift (fromRelName _aarRelationshipName)
joinSelect' <- do
lhsEntityAlias <- ask
-- With this, the foreign key relations are injected automatically
-- at the right place by fromSelectAggregate.
lift (fromSelectAggregate (pure (lhsEntityAlias, mapping)) annSelectG)
alias <- lift (generateEntityAlias (ArrayAggregateTemplate fieldName))
joinOn <- fromMappingFieldNames (EntityAlias alias) mapping
pure
Join
{ joinSelect = joinSelect' {selectSqlTop = NoTop, selectSqlOffset = Nothing},
joinFieldName = "",
joinRightTable = EntityAlias "",
joinType = ArrayAggregateJoin joinOn,
joinTop = selectSqlTop joinSelect',
joinOffset = selectSqlOffset joinSelect'
}
where
IR.AnnRelationSelectG
{ _aarRelationshipName,
_aarColumnMapping = mapping :: HashMap Column Column,
_aarAnnSelect = annSelectG
} = annRelationSelectG
fromArraySelectG :: IR.ArraySelectG 'MySQL Void Expression -> ReaderT EntityAlias FromIr Join
fromArraySelectG =
\case
IR.ASSimple arrayRelationSelectG ->
fromArrayRelationSelectG arrayRelationSelectG
IR.ASAggregate arrayAggregateSelectG ->
fromArrayAggregateSelectG arrayAggregateSelectG
fromObjectRelationSelectG ::
IR.ObjectRelationSelectG 'MySQL Void Expression ->
ReaderT EntityAlias FromIr Join
fromObjectRelationSelectG annRelationSelectG = do
from <- lift $ fromQualifiedTable tableFrom
let entityAlias :: EntityAlias = fromAlias from
fieldSources <-
local
(const entityAlias)
(traverse fromAnnFieldsG fields)
let selectProjections =
concatMap (toList . fieldSourceProjections) fieldSources
filterExpression <- local (const entityAlias) (fromAnnBoolExp tableFilter)
joinOn <- fromMappingFieldNames entityAlias mapping
let joinFieldProjections =
map
( \(fieldName', _) ->
FieldNameProjection
Aliased
{ aliasedThing = fieldName',
aliasedAlias = fName fieldName'
}
)
joinOn
joinFieldName <- lift (fromRelName _aarRelationshipName)
pure
Join
{ joinSelect =
Select
{ selectOrderBy = Nothing,
selectProjections =
OSet.fromList joinFieldProjections
<> OSet.fromList selectProjections, -- Ordering is right-biased.
selectGroupBy = [],
selectFrom = from,
selectJoins = mapMaybe fieldSourceJoin fieldSources,
selectWhere = Where [filterExpression],
selectSqlTop = NoTop,
selectSqlOffset = Nothing,
selectFinalWantedFields = pure (fieldTextNames fields)
},
joinFieldName,
joinRightTable = EntityAlias "",
joinType = ObjectJoin joinOn,
joinTop = NoTop,
joinOffset = Nothing
}
where
IR.AnnObjectSelectG
{ _aosFields = fields :: IR.AnnFieldsG 'MySQL Void Expression,
_aosTableFrom = tableFrom :: TableName,
_aosTableFilter = tableFilter :: IR.AnnBoolExp 'MySQL Expression
} = annObjectSelectG
IR.AnnRelationSelectG
{ _aarRelationshipName,
_aarColumnMapping = mapping :: HashMap Column Column,
_aarAnnSelect = annObjectSelectG :: IR.AnnObjectSelectG 'MySQL Void Expression
} = annRelationSelectG
isEmptyExpression :: Expression -> Bool
isEmptyExpression (AndExpression []) = True
isEmptyExpression (OrExpression []) = True
isEmptyExpression _ = False
fromSelectRows :: IR.AnnSelectG 'MySQL (IR.AnnFieldG 'MySQL Void) Expression -> FromIr Select
fromSelectRows annSelectG = do
selectFrom <-
case from of
IR.FromTable qualifiedObject -> fromQualifiedTable qualifiedObject
IR.FromIdentifier {} -> refute $ pure IdentifierNotSupported
IR.FromFunction {} -> refute $ pure FunctionNotSupported
IR.FromNativeQuery {} -> refute $ pure NativeQueryNotSupported
IR.FromStoredProcedure {} -> error "fromSelectRows: FromStoredProcedure"
Args
{ argsOrderBy,
argsWhere,
argsJoins,
argsDistinct = Proxy,
argsOffset,
argsTop
} <-
runReaderT (fromSelectArgsG args) (fromAlias selectFrom)
fieldSources <-
runReaderT
(traverse fromAnnFieldsG fields)
(fromAlias selectFrom)
filterExpression <-
runReaderT (fromAnnBoolExp permFilter) (fromAlias selectFrom)
let selectProjections =
concatMap (toList . fieldSourceProjections) fieldSources
pure
Select
{ selectOrderBy = argsOrderBy,
selectGroupBy = [],
selectProjections = OSet.fromList selectProjections,
selectFrom = selectFrom,
selectJoins = argsJoins <> mapMaybe fieldSourceJoin fieldSources,
selectWhere = argsWhere <> Where ([filterExpression | not (isEmptyExpression filterExpression)]),
selectSqlOffset = argsOffset,
selectSqlTop = permissionBasedTop <> argsTop,
selectFinalWantedFields = pure (fieldTextNames fields)
}
where
permissionBasedTop =
maybe NoTop Top mPermLimit
IR.AnnSelectG
{ _asnFields = fields,
_asnFrom = from,
_asnPerm = perm,
_asnArgs = args,
_asnNamingConvention = _tCase
} = annSelectG
IR.TablePerm {_tpLimit = mPermLimit, _tpFilter = permFilter} = perm
fromArrayRelationSelectG :: IR.ArrayRelationSelectG 'MySQL Void Expression -> ReaderT EntityAlias FromIr Join
fromArrayRelationSelectG annRelationSelectG = do
joinFieldName <- lift (fromRelName _aarRelationshipName)
sel <- lift (fromSelectRows annSelectG)
joinOn <- fromMappingFieldNames (fromAlias (selectFrom sel)) mapping
let joinFieldProjections =
map
( \(fieldName', _) ->
FieldNameProjection
Aliased
{ aliasedThing = fieldName',
aliasedAlias = fName fieldName'
}
)
joinOn
pure
Join
{ joinSelect =
sel
{ selectProjections =
OSet.fromList joinFieldProjections <> selectProjections sel,
-- Above: Ordering is right-biased.
selectSqlTop = NoTop,
selectSqlOffset = Nothing
},
joinRightTable = fromAlias (selectFrom sel),
joinType = ArrayJoin joinOn,
-- Above: Needed by DataLoader to determine the type of
-- Haskell-native join to perform.
joinFieldName,
joinTop = selectSqlTop sel,
joinOffset = selectSqlOffset sel
}
where
IR.AnnRelationSelectG
{ _aarRelationshipName,
_aarColumnMapping = mapping :: HashMap Column Column,
_aarAnnSelect = annSelectG
} = annRelationSelectG
-- | The main sources of fields: either constant expressions, columns,
-- or values obtained via joins.
fromAnnFieldsG ::
(IR.FieldName, IR.AnnFieldG 'MySQL Void Expression) ->
ReaderT EntityAlias FromIr FieldSource
fromAnnFieldsG (IR.FieldName name, field) =
case field of
IR.AFColumn annColumnField -> do
expression <- fromAnnColumnField annColumnField
pure
( ExpressionFieldSource
Aliased {aliasedThing = expression, aliasedAlias = name}
)
IR.AFExpression text ->
pure
( ExpressionFieldSource
Aliased
{ aliasedThing = ValueExpression (TextValue text),
aliasedAlias = name
}
)
IR.AFObjectRelation objectRelationSelectG ->
fmap
( \aliasedThing ->
JoinFieldSource (Aliased {aliasedThing, aliasedAlias = name})
)
(fromObjectRelationSelectG objectRelationSelectG)
IR.AFArrayRelation arraySelectG ->
fmap
( \aliasedThing ->
JoinFieldSource (Aliased {aliasedThing, aliasedAlias = name})
)
(fromArraySelectG arraySelectG)
mkSQLSelect ::
IR.JsonAggSelect ->
IR.AnnSelectG 'MySQL (IR.AnnFieldG 'MySQL Void) Expression ->
FromIr Select
mkSQLSelect jsonAggSelect annSimpleSel = do
case jsonAggSelect of
IR.JASMultipleRows -> fromSelectRows annSimpleSel
IR.JASSingleObject ->
fromSelectRows annSimpleSel <&> \sel ->
sel
{ selectSqlTop = Top 1
}
-- | Convert from the IR database query into a select.
fromRootField :: IR.QueryDB 'MySQL Void Expression -> FromIr Select
fromRootField =
\case
(IR.QDBSingleRow s) -> mkSQLSelect IR.JASSingleObject s
(IR.QDBMultipleRows s) -> mkSQLSelect IR.JASMultipleRows s
(IR.QDBAggregation s) -> fromSelectAggregate Nothing s
fromMappingFieldNames ::
EntityAlias ->
HashMap Column Column ->
ReaderT EntityAlias FromIr [(FieldName, FieldName)]
fromMappingFieldNames localFrom =
traverse
( \(remoteColumn, localColumn) -> do
localFieldName <- local (const localFrom) (fromColumn localColumn)
remoteFieldName <- fromColumn remoteColumn
pure
( (,)
(localFieldName)
(remoteFieldName)
)
)
. HashMap.toList
fieldTextNames :: IR.AnnFieldsG 'MySQL Void Expression -> [Text]
fieldTextNames = fmap (\(IR.FieldName name, _) -> name)
@ -1,16 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.API () where
import Hasura.Prelude
import Hasura.RQL.Types.BackendType
import Hasura.Server.API.Backend
instance BackendAPI 'MySQL where
metadataV1CommandParsers =
concat
[ sourceCommands @'MySQL,
tableCommands @'MySQL,
tablePermissionsCommands @'MySQL,
relationshipCommands @'MySQL
]
@ -1,135 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.Execute () where
import Data.Aeson as J
import Data.Bifunctor
import Data.Coerce
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
import Data.Text qualified as T
import Data.Text.Encoding qualified as T
import Data.Tree
import Database.MySQL.Base (fetchFields, query, storeResult)
import Hasura.Backends.MySQL.Connection
import Hasura.Backends.MySQL.DataLoader.Execute (OutputValue (..), RecordSet (..))
import Hasura.Backends.MySQL.DataLoader.Execute qualified as DataLoader
import Hasura.Backends.MySQL.DataLoader.Plan qualified as DataLoader
import Hasura.Backends.MySQL.Plan
import Hasura.Backends.MySQL.ToQuery as ToQuery
import Hasura.Backends.MySQL.Types qualified as MySQL
import Hasura.Base.Error
import Hasura.EncJSON
import Hasura.GraphQL.Execute.Backend
import Hasura.GraphQL.Namespace
import Hasura.Prelude hiding (first, second)
import Hasura.RQL.IR
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.SQL.AnyBackend qualified as AB
import Hasura.Session
import Language.GraphQL.Draft.Syntax qualified as G
import Network.HTTP.Types qualified as HTTP
instance BackendExecute 'MySQL where
type PreparedQuery 'MySQL = Text
type MultiplexedQuery 'MySQL = Void
type ExecutionMonad 'MySQL = IdentityT
mkDBQueryPlan = mysqlDBQueryPlan
mkDBMutationPlan = error "mkDBMutationPlan: MySQL backend does not support this operation yet."
mkLiveQuerySubscriptionPlan _ _ _ _ _ _ = error "mkLiveQuerySubscriptionPlan: MySQL backend does not support this operation yet."
mkDBStreamingSubscriptionPlan _ _ _ _ _ = error "mkDBStreamingSubscriptionPlan: MySQL backend does not support this operation yet."
mkDBQueryExplain = mysqlDBQueryExplain
mkSubscriptionExplain _ = error "mkSubscriptionExplain: MySQL backend does not support this operation yet."
mkDBRemoteRelationshipPlan = error "mkDBRemoteRelationshipPlan: MySQL does not support this operation yet."
mysqlDBQueryPlan ::
forall m.
( MonadError QErr m
) =>
UserInfo ->
SourceName ->
SourceConfig 'MySQL ->
QueryDB 'MySQL Void (UnpreparedValue 'MySQL) ->
[HTTP.Header] ->
Maybe G.Name ->
m (DBStepInfo 'MySQL)
mysqlDBQueryPlan userInfo sourceName sourceConfig qrf _ _ = do
(headAndTail, actionsForest) <- queryToActionForest userInfo qrf
pure
( DBStepInfo
@'MySQL
sourceName
sourceConfig
(Just (T.pack (drawForest (fmap (fmap show) actionsForest))))
( OnBaseMonad do
result <-
DataLoader.runExecute
sourceConfig
headAndTail
(DataLoader.execute actionsForest)
either
(throw500WithDetail "MySQL DataLoader Error" . toJSON . show)
(pure . withNoStatistics . encJFromRecordSet)
result
)
()
)
--------------------------------------------------------------------------------
-- Encoding for Hasura's GraphQL JSON representation
mysqlDBQueryExplain ::
MonadError QErr m =>
RootFieldAlias ->
UserInfo ->
SourceName ->
SourceConfig 'MySQL ->
QueryDB 'MySQL Void (UnpreparedValue 'MySQL) ->
[HTTP.Header] ->
Maybe G.Name ->
m (AB.AnyBackend DBStepInfo)
mysqlDBQueryExplain fieldName userInfo sourceName sourceConfig qrf _ _ = do
select :: MySQL.Select <- planQuery (_uiSession userInfo) qrf
let sqlQuery = selectSQLTextForQuery select
sqlQueryText = (T.decodeUtf8 . unQuery . toQueryPretty) (ToQuery.fromSelect select)
explainResult = OnBaseMonad $
withMySQLPool
(MySQL.scConnectionPool sourceConfig)
\conn -> do
query conn ("EXPLAIN FORMAT=JSON " <> (unQuery sqlQuery))
result <- storeResult conn
fields <- fetchFields result
rows <- fetchAllRows result
let texts = concat $ parseTextRows fields rows
pure $ withNoStatistics $ encJFromJValue $ ExplainPlan fieldName (Just sqlQueryText) (Just texts)
pure $
AB.mkAnyBackend $
DBStepInfo @'MySQL sourceName sourceConfig Nothing explainResult ()
selectSQLTextForQuery :: MySQL.Select -> ToQuery.Query
selectSQLTextForQuery select = toQueryFlat $ ToQuery.fromSelect select
encJFromRecordSet :: RecordSet -> EncJSON
encJFromRecordSet RecordSet {rows} =
encJFromList
( map
( encJFromAssocList
. map (first coerce . second encJFromOutputValue)
. InsOrdHashMap.toList
)
(toList rows)
)
encJFromOutputValue :: DataLoader.OutputValue -> EncJSON
encJFromOutputValue =
\case
ArrayOutputValue array -> encJFromList (map encJFromOutputValue (toList array))
RecordOutputValue m ->
encJFromAssocList
. map (first coerce . second encJFromOutputValue)
. InsOrdHashMap.toList
$ m
ScalarOutputValue value -> encJFromJValue value
NullOutputValue {} -> encJFromJValue J.Null


@@ -1,30 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.Metadata () where
import Hasura.Backends.MySQL.Connection qualified as MySQL
import Hasura.Backends.MySQL.Schema.Introspection qualified as MySQL (listAllTables)
import Hasura.Prelude
import Hasura.RQL.DDL.Relationship (defaultBuildArrayRelationshipInfo, defaultBuildObjectRelationshipInfo)
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.EventTrigger (RecreateEventTriggers (RETDoNothing))
import Hasura.RQL.Types.Metadata.Backend
import Hasura.Server.Migrate.Version (SourceCatalogMigrationState (SCMSNotSupported))
instance BackendMetadata 'MySQL where
prepareCatalog _ = pure (RETDoNothing, SCMSNotSupported)
buildComputedFieldInfo = error "buildComputedFieldInfo: MySQL backend does not support this operation yet."
fetchAndValidateEnumValues = error "fetchAndValidateEnumValues: MySQL backend does not support this operation yet."
resolveSourceConfig = MySQL.resolveSourceConfig
resolveDatabaseMetadata _ _ = MySQL.resolveDatabaseMetadata
parseBoolExpOperations = error "parseBoolExpOperations: MySQL backend does not support this operation yet."
buildArrayRelationshipInfo _ = defaultBuildArrayRelationshipInfo
buildObjectRelationshipInfo _ = defaultBuildObjectRelationshipInfo
buildFunctionInfo = error "buildFunctionInfo: MySQL backend does not support this operation yet."
updateColumnInEventTrigger = error "updateColumnInEventTrigger: MySQL backend does not support this operation yet."
parseCollectableType = error "parseCollectableType: MySQL backend does not support this operation yet."
postDropSourceHook = MySQL.postDropSourceHook
buildComputedFieldBooleanExp _ _ _ _ _ _ =
error "buildComputedFieldBooleanExp: MySQL backend does not support this operation yet."
listAllTables = MySQL.listAllTables
supportsBeingRemoteRelationshipTarget _ = False


@@ -1,233 +0,0 @@
{-# LANGUAGE ApplicativeDo #-}
{-# LANGUAGE TemplateHaskellQuotes #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.Schema () where
import Data.ByteString (ByteString)
import Data.HashMap.Strict qualified as HashMap
import Data.List.NonEmpty qualified as NE
import Data.Text.Encoding (encodeUtf8)
import Data.Text.Extended
import Database.MySQL.Base.Types qualified as MySQL
import Hasura.Backends.MySQL.Types qualified as MySQL
import Hasura.Base.Error
import Hasura.Base.ErrorMessage (toErrorMessage)
import Hasura.GraphQL.Schema.Backend
import Hasura.GraphQL.Schema.Build qualified as GSB
import Hasura.GraphQL.Schema.Common
import Hasura.GraphQL.Schema.NamingCase
import Hasura.GraphQL.Schema.Parser
( InputFieldsParser,
Kind (..),
MonadParse,
Parser,
)
import Hasura.GraphQL.Schema.Parser qualified as P
import Hasura.GraphQL.Schema.Select
import Hasura.Name qualified as Name
import Hasura.Prelude
import Hasura.RQL.IR
import Hasura.RQL.IR.Select qualified as IR
import Hasura.RQL.Types.Backend as RQL
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column as RQL
import Hasura.RQL.Types.SchemaCache as RQL
import Language.GraphQL.Draft.Syntax qualified as GQL
instance BackendSchema 'MySQL where
buildTableQueryAndSubscriptionFields = GSB.buildTableQueryAndSubscriptionFields
buildTableRelayQueryFields _ _ _ _ _ = pure []
buildTableStreamingSubscriptionFields = GSB.buildTableStreamingSubscriptionFields
buildTableInsertMutationFields _ _ _ _ _ = pure []
buildTableUpdateMutationFields _ _ _ = pure []
buildTableDeleteMutationFields _ _ _ _ _ = pure []
buildFunctionQueryFields _ _ _ _ = pure []
buildFunctionRelayQueryFields _ _ _ _ _ = pure []
buildFunctionMutationFields _ _ _ _ = pure []
relayExtension = Nothing
nodesAggExtension = Just ()
streamSubscriptionExtension = Nothing
columnParser = columnParser'
enumParser = enumParser'
possiblyNullable = possiblyNullable'
scalarSelectionArgumentsParser _ = pure Nothing
orderByOperators _sourceInfo = orderByOperators'
comparisonExps = comparisonExps'
countTypeInput = mysqlCountTypeInput
aggregateOrderByCountType = error "aggregateOrderByCountType: MySQL backend does not support this operation yet."
computedField = error "computedField: MySQL backend does not support this operation yet."
instance BackendTableSelectSchema 'MySQL where
tableArguments = mysqlTableArgs
selectTable = defaultSelectTable
selectTableAggregate = defaultSelectTableAggregate
tableSelectionSet = defaultTableSelectionSet
mysqlTableArgs ::
forall r m n.
MonadBuildSchema 'MySQL r m n =>
TableInfo 'MySQL ->
SchemaT r m (InputFieldsParser n (IR.SelectArgsG 'MySQL (UnpreparedValue 'MySQL)))
mysqlTableArgs tableInfo = do
whereParser <- tableWhereArg tableInfo
orderByParser <- tableOrderByArg tableInfo
pure do
whereArg <- whereParser
orderByArg <- orderByParser
limitArg <- tableLimitArg
offsetArg <- tableOffsetArg
pure $
IR.SelectArgs
{ IR._saWhere = whereArg,
IR._saOrderBy = orderByArg,
IR._saLimit = limitArg,
IR._saOffset = offsetArg,
IR._saDistinct = Nothing
}
bsParser :: MonadParse m => Parser 'Both m ByteString
bsParser = encodeUtf8 <$> P.string
columnParser' ::
MonadBuildSchema 'MySQL r m n =>
ColumnType 'MySQL ->
GQL.Nullability ->
SchemaT r m (Parser 'Both n (ValueWithOrigin (ColumnValue 'MySQL)))
columnParser' columnType nullability = case columnType of
ColumnScalar scalarType ->
P.memoizeOn 'columnParser' (scalarType, nullability) $
peelWithOrigin . fmap (ColumnValue columnType) . possiblyNullable' scalarType nullability
<$> case scalarType of
MySQL.Decimal -> pure $ MySQL.DecimalValue <$> P.float
MySQL.Tiny -> pure $ MySQL.TinyValue <$> P.int
MySQL.Short -> pure $ MySQL.SmallValue <$> P.int
MySQL.Long -> pure $ MySQL.IntValue <$> P.int
MySQL.Float -> pure $ MySQL.FloatValue <$> P.float
MySQL.Double -> pure $ MySQL.DoubleValue <$> P.float
MySQL.Null -> pure $ MySQL.NullValue <$ P.string
MySQL.LongLong -> pure $ MySQL.BigValue <$> P.int
MySQL.Int24 -> pure $ MySQL.MediumValue <$> P.int
MySQL.Date -> pure $ MySQL.DateValue <$> P.string
MySQL.Year -> pure $ MySQL.YearValue <$> P.string
MySQL.Bit -> pure $ MySQL.BitValue <$> P.boolean
MySQL.String -> pure $ MySQL.VarcharValue <$> P.string
MySQL.VarChar -> pure $ MySQL.VarcharValue <$> P.string
MySQL.DateTime -> pure $ MySQL.DatetimeValue <$> P.string
MySQL.Blob -> pure $ MySQL.BlobValue <$> bsParser
MySQL.Timestamp -> pure $ MySQL.TimestampValue <$> P.string
_ -> do
name <- MySQL.mkMySQLScalarTypeName scalarType
let schemaType = P.TNamed P.NonNullable $ P.Definition name Nothing Nothing [] P.TIScalar
pure $
P.Parser
{ pType = schemaType,
pParser =
P.valueToJSON (P.toGraphQLType schemaType)
>=> either (P.parseErrorWith P.ParseFailed . toErrorMessage . qeError) pure . (MySQL.parseScalarValue scalarType)
}
ColumnEnumReference (EnumReference tableName enumValues customTableName) ->
case nonEmpty (HashMap.toList enumValues) of
Just enumValuesList ->
peelWithOrigin . fmap (ColumnValue columnType)
<$> enumParser' tableName enumValuesList customTableName nullability
Nothing -> throw400 ValidationFailed "empty enum values"
enumParser' ::
MonadBuildSchema 'MySQL r m n =>
TableName 'MySQL ->
NonEmpty (EnumValue, EnumValueInfo) ->
Maybe GQL.Name ->
GQL.Nullability ->
SchemaT r m (Parser 'Both n (ScalarValue 'MySQL))
enumParser' tableName enumValues customTableName nullability = do
enumName <- mkEnumTypeName @'MySQL tableName customTableName
pure $ possiblyNullable' MySQL.VarChar nullability $ P.enum enumName Nothing (mkEnumValue <$> enumValues)
where
mkEnumValue :: (EnumValue, EnumValueInfo) -> (P.Definition P.EnumValueInfo, RQL.ScalarValue 'MySQL)
mkEnumValue (RQL.EnumValue value, EnumValueInfo description) =
( P.Definition value (GQL.Description <$> description) Nothing [] P.EnumValueInfo,
MySQL.VarcharValue $ GQL.unName value
)
possiblyNullable' ::
(MonadParse m) =>
ScalarType 'MySQL ->
GQL.Nullability ->
Parser 'Both m (ScalarValue 'MySQL) ->
Parser 'Both m (ScalarValue 'MySQL)
possiblyNullable' _scalarType (GQL.Nullability isNullable)
| isNullable = fmap (fromMaybe MySQL.NullValue) . P.nullable
| otherwise = id
orderByOperators' :: NamingCase -> (GQL.Name, NonEmpty (P.Definition P.EnumValueInfo, (BasicOrderType 'MySQL, NullsOrderType 'MySQL)))
orderByOperators' _tCase =
(Name._order_by,) $
-- NOTE: NamingCase is not being used here as we don't support naming conventions for this DB
NE.fromList
[ ( define Name._asc "in ascending order, nulls first",
(MySQL.Asc, MySQL.NullsFirst)
),
( define Name._asc_nulls_first "in ascending order, nulls first",
(MySQL.Asc, MySQL.NullsFirst)
),
( define Name._asc_nulls_last "in ascending order, nulls last",
(MySQL.Asc, MySQL.NullsLast)
),
( define Name._desc "in descending order, nulls last",
(MySQL.Desc, MySQL.NullsLast)
),
( define Name._desc_nulls_first "in descending order, nulls first",
(MySQL.Desc, MySQL.NullsFirst)
),
( define Name._desc_nulls_last "in descending order, nulls last",
(MySQL.Desc, MySQL.NullsLast)
)
]
where
define name desc = P.Definition name (Just desc) Nothing [] P.EnumValueInfo
-- | TODO: Make this as thorough as the one for MSSQL/PostgreSQL
comparisonExps' ::
forall m n r.
MonadBuildSchema 'MySQL r m n =>
ColumnType 'MySQL ->
SchemaT r m (Parser 'Input n [ComparisonExp 'MySQL])
comparisonExps' = P.memoize 'comparisonExps $ \columnType -> do
-- see Note [Columns in comparison expression are never nullable]
typedParser <- columnParser columnType (GQL.Nullability False)
let name = P.getName typedParser <> Name.__MySQL_comparison_exp
desc =
GQL.Description $
"Boolean expression to compare columns of type "
<> P.getName typedParser
<<> ". All fields are combined with logical 'AND'."
pure $
P.object name (Just desc) $
catMaybes
<$> sequenceA
[ P.fieldOptional Name.__is_null Nothing (bool ANISNOTNULL ANISNULL <$> P.boolean),
P.fieldOptional Name.__eq Nothing (AEQ True . mkParameter <$> typedParser),
P.fieldOptional Name.__neq Nothing (ANE True . mkParameter <$> typedParser),
P.fieldOptional Name.__gt Nothing (AGT . mkParameter <$> typedParser),
P.fieldOptional Name.__lt Nothing (ALT . mkParameter <$> typedParser),
P.fieldOptional Name.__gte Nothing (AGTE . mkParameter <$> typedParser),
P.fieldOptional Name.__lte Nothing (ALTE . mkParameter <$> typedParser)
]
mysqlCountTypeInput ::
MonadParse n =>
Maybe (Parser 'Both n (Column 'MySQL)) ->
InputFieldsParser n (IR.CountDistinct -> CountType 'MySQL)
mysqlCountTypeInput = \case
Just columnEnum -> do
columns <- P.fieldOptional Name._columns Nothing $ P.list columnEnum
pure $ flip mkCountType columns
Nothing -> pure $ flip mkCountType Nothing
where
mkCountType :: IR.CountDistinct -> Maybe [Column 'MySQL] -> CountType 'MySQL
mkCountType _ Nothing = MySQL.StarCountable
mkCountType IR.SelectCountDistinct (Just cols) =
maybe MySQL.StarCountable MySQL.DistinctCountable $ nonEmpty cols
mkCountType IR.SelectCountNonDistinct (Just cols) =
maybe MySQL.StarCountable MySQL.NonNullFieldCountable $ nonEmpty cols


@@ -1,12 +0,0 @@
{-# LANGUAGE UndecidableInstances #-}
{-# OPTIONS_GHC -Wno-orphans #-}
-- This module houses type class instances on the MySQL backend that relate
-- to the Schema Cache.
module Hasura.Backends.MySQL.Instances.SchemaCache () where
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType (BackendType (MySQL))
import Hasura.RQL.Types.SchemaCacheTypes (GetAggregationPredicatesDeps)
instance (Backend 'MySQL) => GetAggregationPredicatesDeps 'MySQL


@@ -1,85 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.Transport (runQuery) where
import Control.Monad.Trans.Control
import Data.Aeson qualified as J
import Data.Text.Extended
import Hasura.Backends.DataConnector.Agent.Client (AgentLicenseKey)
import Hasura.Backends.MySQL.Instances.Execute ()
import Hasura.Base.Error
import Hasura.CredentialCache
import Hasura.EncJSON
import Hasura.GraphQL.Execute.Backend
import Hasura.GraphQL.Logging
import Hasura.GraphQL.Namespace (RootFieldAlias)
import Hasura.GraphQL.Transport.Backend
import Hasura.GraphQL.Transport.HTTP.Protocol
import Hasura.Logging qualified as L
import Hasura.Prelude
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.SQL.AnyBackend (AnyBackend)
import Hasura.Server.Types (RequestId)
import Hasura.Session
import Hasura.Tracing
instance BackendTransport 'MySQL where
runDBQuery = runQuery
runDBQueryExplain = runQueryExplain
runDBMutation = error "runDBMutation: MySQL backend does not support this operation yet."
runDBSubscription = error "runDBSubscription: MySQL backend does not support this operation yet."
runDBStreamingSubscription = error "runDBStreamingSubscription: MySQL backend does not support this operation yet."
runQuery ::
( MonadIO m,
MonadBaseControl IO m,
MonadQueryLog m,
MonadTrace m,
MonadError QErr m
) =>
RequestId ->
GQLReqUnparsed ->
RootFieldAlias ->
UserInfo ->
L.Logger L.Hasura ->
Maybe (CredentialCache AgentLicenseKey) ->
SourceConfig 'MySQL ->
OnBaseMonad IdentityT (Maybe (AnyBackend ExecutionStats), EncJSON) ->
Maybe (PreparedQuery 'MySQL) ->
ResolvedConnectionTemplate 'MySQL ->
-- | Also return the time spent in the MySQL query; for telemetry.
m (DiffTime, EncJSON)
runQuery reqId query fieldName _userInfo logger _ _sourceConfig tx genSql _ = do
logQueryLog logger $ mkQueryLog query fieldName genSql reqId
withElapsedTime $
newSpan ("MySQL Query for root field " <>> fieldName) $
fmap snd (run tx)
runQueryExplain ::
( MonadIO m,
MonadBaseControl IO m,
MonadError QErr m,
MonadTrace m
) =>
Maybe (CredentialCache AgentLicenseKey) ->
DBStepInfo 'MySQL ->
m EncJSON
runQueryExplain _ (DBStepInfo _ _ _ action _) = fmap arResult (run action)
run :: (MonadIO m, MonadBaseControl IO m, MonadError QErr m, MonadTrace m) => OnBaseMonad IdentityT a -> m a
run = runIdentityT . runOnBaseMonad
mkQueryLog ::
GQLReqUnparsed ->
RootFieldAlias ->
Maybe (PreparedQuery 'MySQL) ->
RequestId ->
QueryLog
mkQueryLog gqlQuery fieldName preparedSql requestId =
-- @QueryLogKindDatabase Nothing@ means that the backend doesn't support connection templates
QueryLog gqlQuery ((fieldName,) <$> generatedQuery) requestId (QueryLogKindDatabase Nothing)
where
generatedQuery =
preparedSql <&> \queryString ->
GeneratedQuery queryString J.Null


@@ -1,160 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
module Hasura.Backends.MySQL.Instances.Types () where
import Data.Aeson qualified as J
import Data.Pool qualified as Pool
import Data.Text.Casing qualified as C
import Database.MySQL.Base.Types qualified as MySQL
import Hasura.Backends.MySQL.Types qualified as MySQL
import Hasura.Base.Error
import Hasura.Prelude
import Hasura.RQL.DDL.Headers ()
import Hasura.RQL.Types.Backend
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.ResizePool
import Language.GraphQL.Draft.Syntax qualified as G
instance Backend 'MySQL where
type BackendConfig 'MySQL = ()
type BackendInfo 'MySQL = ()
type TableName 'MySQL = MySQL.TableName
type FunctionName 'MySQL = MySQL.FunctionName
type RawFunctionInfo 'MySQL = Void -- MySQL.FunctionName
type FunctionArgument 'MySQL = Void
type ConstraintName 'MySQL = MySQL.ConstraintName
type BasicOrderType 'MySQL = MySQL.Order
type NullsOrderType 'MySQL = MySQL.NullsOrder
type CountType 'MySQL = MySQL.Countable MySQL.Column
type Column 'MySQL = MySQL.Column
type ScalarValue 'MySQL = MySQL.ScalarValue
type ScalarType 'MySQL = MySQL.ScalarType -- DB.Type
type SQLExpression 'MySQL = MySQL.Expression
type ScalarSelectionArguments 'MySQL = Void
type BooleanOperators 'MySQL = Const Void
type ComputedFieldDefinition 'MySQL = Void
type FunctionArgumentExp 'MySQL = Const Void
type ComputedFieldImplicitArguments 'MySQL = Void
type ComputedFieldReturn 'MySQL = Void
type XComputedField 'MySQL = Void
type XRelay 'MySQL = Void
type XNodesAgg 'MySQL = XEnable
type ExtraTableMetadata 'MySQL = ()
type XEventTriggers 'MySQL = XDisable
type XNestedInserts 'MySQL = XDisable
type XStreamingSubscription 'MySQL = XDisable
type HealthCheckTest 'MySQL = Void
isComparableType :: ScalarType 'MySQL -> Bool
isComparableType = isNumType @'MySQL -- TODO: For now we only allow comparisons for numeric types
isNumType :: ScalarType 'MySQL -> Bool
isNumType = \case
MySQL.Decimal -> True
MySQL.Tiny -> True
MySQL.Short -> True
MySQL.Long -> True
MySQL.Float -> True
MySQL.Double -> True
MySQL.Null -> False
MySQL.Timestamp -> False
MySQL.LongLong -> True
MySQL.Int24 -> True
MySQL.Date -> False
MySQL.Time -> False
MySQL.DateTime -> False
MySQL.Year -> False
MySQL.NewDate -> False
MySQL.VarChar -> False
MySQL.Bit -> False
MySQL.NewDecimal -> True
MySQL.Enum -> False
MySQL.Set -> False
MySQL.TinyBlob -> False
MySQL.MediumBlob -> False
MySQL.LongBlob -> False
MySQL.Blob -> False
MySQL.VarString -> False
MySQL.String -> False
MySQL.Geometry -> False
MySQL.Json -> False
textToScalarValue :: Maybe Text -> ScalarValue 'MySQL
textToScalarValue = error "textToScalarValue: MySQL backend does not support this operation yet."
parseScalarValue :: ScalarType 'MySQL -> J.Value -> Either QErr (ScalarValue 'MySQL)
parseScalarValue = error "parseScalarValue: MySQL backend does not support this operation yet."
scalarValueToJSON :: ScalarValue 'MySQL -> J.Value
scalarValueToJSON = error "scalarValueToJSON: MySQL backend does not support this operation yet."
functionToTable :: FunctionName 'MySQL -> TableName 'MySQL
functionToTable = error "functionToTable: MySQL backend does not support this operation yet."
tableToFunction :: TableName 'MySQL -> FunctionName 'MySQL
tableToFunction = MySQL.FunctionName . MySQL.name
tableGraphQLName :: TableName 'MySQL -> Either QErr G.Name
tableGraphQLName MySQL.TableName {..} =
let gName = maybe "" (<> "_") schema <> name
in (G.mkName gName)
`onNothing` throw400 ValidationFailed ("TableName " <> gName <> " is not a valid GraphQL identifier")
functionGraphQLName :: FunctionName 'MySQL -> Either QErr G.Name
functionGraphQLName = error "functionGraphQLName: MySQL backend does not support this operation yet."
snakeCaseTableName :: TableName 'MySQL -> Text
snakeCaseTableName MySQL.TableName {name, schema} =
maybe "" (<> "_") schema <> name
getTableIdentifier :: TableName 'MySQL -> Either QErr C.GQLNameIdentifier
getTableIdentifier MySQL.TableName {..} = do
let gName = maybe "" (<> "_") schema <> name
gqlTableName <-
(G.mkName gName)
`onNothing` throw400 ValidationFailed ("TableName " <> gName <> " is not a valid GraphQL identifier")
pure $ C.fromAutogeneratedName gqlTableName
namingConventionSupport :: SupportedNamingCase
namingConventionSupport = OnlyHasuraCase
computedFieldFunction :: ComputedFieldDefinition 'MySQL -> FunctionName 'MySQL
computedFieldFunction = error "computedFieldFunction: MySQL backend does not support this operation yet."
computedFieldReturnType :: ComputedFieldReturn 'MySQL -> ComputedFieldReturnType 'MySQL
computedFieldReturnType = error "computedFieldReturnType: MySQL backend does not support this operation yet."
fromComputedFieldImplicitArguments :: v -> ComputedFieldImplicitArguments 'MySQL -> [FunctionArgumentExp 'MySQL v]
fromComputedFieldImplicitArguments = error "fromComputedFieldImplicitArguments: MySQL backend does not support this operation yet."
resizeSourcePools :: SourceConfig 'MySQL -> ServerReplicas -> IO SourceResizePoolSummary
resizeSourcePools sourceConfig serverReplicas = do
-- As of writing this, the MySQL backend isn't generally available (GA);
-- however, we implement the pool resize logic anyway.
let pool = MySQL.scConnectionPool sourceConfig
maxConnections =
fromInteger $
toInteger $
MySQL._cscMaxConnections $
MySQL._cscPoolSettings $
MySQL.scConfig sourceConfig
-- Resize the pool max resources
Pool.resizePool pool (maxConnections `div` getServerReplicasInt serverReplicas)
-- Trim pool by destroying excess resources, if any
Pool.tryTrimPool pool
-- Return the summary. Only the primary pool is resized
pure $
SourceResizePoolSummary
{ _srpsPrimaryResized = True,
_srpsReadReplicasResized = False,
_srpsConnectionSet = []
}
defaultTriggerOnReplication = Nothing
instance HasSourceConfiguration 'MySQL where
type SourceConfig 'MySQL = MySQL.SourceConfig
type SourceConnConfiguration 'MySQL = MySQL.ConnSourceConfig
sourceConfigNumReadReplicas = const 0 -- not supported
sourceConfigConnectonTemplateEnabled = const False -- not supported
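The per-replica sizing in `resizeSourcePools` above divides the configured connection cap evenly across server replicas with integer division. As a standalone sketch (hypothetical function name; the division-by-zero guard is an added assumption, not in the original):

```haskell
-- Standalone sketch (hypothetical name): each replica gets an equal integer
-- share of the configured connection cap; max 1 guards against zero replicas.
perReplicaPoolSize :: Int -> Int -> Int
perReplicaPoolSize maxConnections serverReplicas =
  maxConnections `div` max 1 serverReplicas

main :: IO ()
main = print (perReplicaPoolSize 50 4)
```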


@@ -1,227 +0,0 @@
{-# LANGUAGE TemplateHaskell #-}
module Hasura.Backends.MySQL.Meta
( getMetadata,
)
where
import Control.Exception (throw)
import Data.ByteString.Char8 qualified as B8
import Data.FileEmbed (embedFile, makeRelativeToProject)
import Data.HashMap.Strict qualified as HashMap
import Data.HashMap.Strict.NonEmpty qualified as NEHashMap
import Data.HashSet qualified as HS
import Data.Sequence.NonEmpty qualified as NESeq
import Data.String (fromString)
import Database.MySQL.Base (Connection)
import Database.MySQL.Base.Types (Field (..))
import Database.MySQL.Simple (Only (Only), query)
import Database.MySQL.Simple.QueryResults (QueryResults (..), convertError)
import Database.MySQL.Simple.Result (Result, ResultError (..), convert)
import Hasura.Backends.MySQL.Instances.Types ()
import Hasura.Backends.MySQL.Types
import Hasura.Prelude
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Table
import Language.GraphQL.Draft.Syntax qualified as G
getMetadata :: ConnSourceConfig -> Connection -> IO (DBTablesMetadata 'MySQL)
getMetadata ConnSourceConfig {_cscDatabase} scConnection = do
let sql = $(makeRelativeToProject "src-rsr/mysql_table_metadata.sql" >>= embedFile)
results :: [InformationSchema] <- query scConnection (fromString . B8.unpack $ sql) (Only _cscDatabase)
pure (mkMetadata results)
mkMetadata :: [InformationSchema] -> DBTablesMetadata 'MySQL
mkMetadata = foldr mergeMetadata HashMap.empty
mergeMetadata :: InformationSchema -> DBTablesMetadata 'MySQL -> DBTablesMetadata 'MySQL
mergeMetadata InformationSchema {..} =
HashMap.insertWith
mergeDBTableMetadata
(TableName {name = isTableName, schema = pure isTableSchema})
$ DBTableMetadata
{ _ptmiOid = OID 0,
_ptmiColumns =
[ RawColumnInfo
{ rciName = Column isColumnName,
rciPosition = fromIntegral isOrdinalPosition,
rciType = parseMySQLScalarType isColumnType, -- TODO: This needs to become more precise by considering Field length and character-set
rciIsNullable = isIsNullable == "YES", -- ref: https://dev.mysql.com/doc/refman/8.0/en/information-schema-columns-table.html
rciDescription = Just $ G.Description isColumnComment,
rciMutability = ColumnMutability {_cmIsInsertable = True, _cmIsUpdatable = True}
}
],
_ptmiPrimaryKey =
if isColumnKey == PRI
then
Just $
PrimaryKey
( Constraint
(ConstraintName $ fromMaybe "" isConstraintName)
(OID $ fromIntegral $ fromMaybe 0 isConstraintOrdinalPosition)
)
(NESeq.singleton (Column isColumnName))
else Nothing,
_ptmiUniqueConstraints =
if isColumnKey == UNI
then
HS.singleton
( UniqueConstraint
{ _ucConstraint =
Constraint
(ConstraintName $ fromMaybe "" isConstraintName)
(OID $ fromIntegral $ fromMaybe 0 isConstraintOrdinalPosition),
_ucColumns = HS.singleton (Column isColumnName)
}
)
else HS.empty,
_ptmiForeignKeys =
if isColumnKey == MUL
then
HS.singleton
( ForeignKeyMetadata
( ForeignKey
( Constraint
(ConstraintName $ fromMaybe "" isConstraintName)
(OID $ fromIntegral $ fromMaybe 0 isConstraintOrdinalPosition)
)
( TableName
{ name = (fromMaybe "" isReferencedTableName),
schema = isReferencedTableSchema
}
)
( NEHashMap.singleton
(Column isColumnName)
(Column $ fromMaybe "" isReferencedColumnName)
)
)
)
else HS.empty,
_ptmiViewInfo = Nothing,
_ptmiDescription = Nothing,
_ptmiExtraTableMetadata = (),
_ptmiCustomObjectTypes = mempty
}
mergeDBTableMetadata :: DBTableMetadata 'MySQL -> DBTableMetadata 'MySQL -> DBTableMetadata 'MySQL
mergeDBTableMetadata new existing =
DBTableMetadata
{ _ptmiOid = OID 0,
_ptmiColumns = _ptmiColumns existing <> _ptmiColumns new,
_ptmiPrimaryKey = _ptmiPrimaryKey existing <|> _ptmiPrimaryKey new, -- Only one column can be a PRIMARY KEY, so this is just a courtesy choice.
_ptmiUniqueConstraints = _ptmiUniqueConstraints existing <> _ptmiUniqueConstraints new, -- union
_ptmiForeignKeys = _ptmiForeignKeys existing <> _ptmiForeignKeys new, -- union
_ptmiViewInfo = _ptmiViewInfo existing <|> _ptmiViewInfo new,
_ptmiDescription = _ptmiDescription existing <|> _ptmiDescription new,
_ptmiExtraTableMetadata = (),
_ptmiCustomObjectTypes = mempty
}
data InformationSchema = InformationSchema
{ isTableSchema :: Text,
isTableName :: Text,
isColumnName :: Text,
isOrdinalPosition :: Word,
isColumnDefault :: Maybe Text,
isIsNullable :: Text,
isDataType :: Maybe Text,
isColumnType :: Text,
isColumnKey :: InformationSchemaColumnKey,
isColumnComment :: Text,
isConstraintName :: Maybe Text,
isConstraintOrdinalPosition :: Maybe Word,
isPositionInUniqueConstraint :: Maybe Word,
isReferencedTableSchema :: Maybe Text,
isReferencedTableName :: Maybe Text,
isReferencedColumnName :: Maybe Text
}
deriving (Show, Eq, Generic)
instance QueryResults InformationSchema where
convertResults
[ fisTableSchema,
fisTableName,
fisColumnName,
fisOrdinalPosition,
fisColumnDefault,
fisIsNullable,
fisDataType,
fisColumnType,
fisColumnKey,
fisColumnComment,
fisConstraintName,
fisConstraintOrdinalPosition,
fisPositionInUniqueConstraint,
fisReferencedTableSchema,
fisReferencedTableName,
fisReferencedColumnName
]
[ visTableSchema,
visTableName,
visColumnName,
visOrdinalPosition,
visColumnDefault,
visIsNullable,
visDataType,
visColumnType,
visColumnKey,
visColumnComment,
visConstraintName,
visConstraintOrdinalPosition,
visPositionInUniqueConstraint,
visReferencedTableSchema,
visReferencedTableName,
visReferencedColumnName
] =
InformationSchema
(convert fisTableSchema visTableSchema)
(convert fisTableName visTableName)
(convert fisColumnName visColumnName)
(convert fisOrdinalPosition visOrdinalPosition)
(convert fisColumnDefault visColumnDefault)
(convert fisIsNullable visIsNullable)
(convert fisDataType visDataType)
(convert fisColumnType visColumnType)
(convert fisColumnKey visColumnKey)
(convert fisColumnComment visColumnComment)
(convert fisConstraintName visConstraintName)
(convert fisConstraintOrdinalPosition visConstraintOrdinalPosition)
(convert fisPositionInUniqueConstraint visPositionInUniqueConstraint)
(convert fisReferencedTableSchema visReferencedTableSchema)
(convert fisReferencedTableName visReferencedTableName)
(convert fisReferencedColumnName visReferencedColumnName)
convertResults fs vs = convertError fs vs 16
data InformationSchemaColumnKey
= PRI
| UNI
| MUL
| -- | This field isn't NULLable and uses empty strings, by the looks of it.
BLANK
deriving (Show, Read, Eq, Generic)
instance Result InformationSchemaColumnKey where
convert f mbs =
case mbs of
Nothing ->
throw $
UnexpectedNull
(show $ fieldType f)
"InformationSchemaColumnKey"
(B8.unpack $ fieldName f)
"COLUMN_KEY in INFORMATION_SCHEMA cannot be NULL"
Just bs -> case bs of
-- Could have used 'readMaybe' here, but we need the specific errors.
"PRI" -> PRI -- primary key
"UNI" -> UNI -- unique key
"MUL" -> MUL -- foreign key (`MUL`tiple allowed, non-unique key)
"" -> BLANK
x ->
throw $
ConversionFailed
(show $ fieldType f)
"InformationSchemaColumnKey"
(B8.unpack $ fieldName f)
("COLUMN_KEY in INFORMATION_SCHEMA has value extraneous to the expected ENUM: " <> B8.unpack x)
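The `COLUMN_KEY` mapping implemented by the `Result` instance above can be sketched in isolation as a total function (hypothetical names, not the deleted module's API; it returns `Nothing` where the instance throws `ConversionFailed`):

```haskell
-- Standalone sketch (hypothetical): total parsing of
-- INFORMATION_SCHEMA.COLUMNS.COLUMN_KEY values, returning Nothing for
-- unexpected input instead of throwing.
data ColumnKey = PriKey | UniKey | MulKey | Blank
  deriving (Show, Eq)

parseColumnKey :: String -> Maybe ColumnKey
parseColumnKey "PRI" = Just PriKey -- primary key
parseColumnKey "UNI" = Just UniKey -- unique key
parseColumnKey "MUL" = Just MulKey -- non-unique (multiple-allowed) key
parseColumnKey ""    = Just Blank  -- COLUMN_KEY uses empty strings, not NULL
parseColumnKey _     = Nothing

main :: IO ()
main = print (map parseColumnKey ["PRI", "UNI", "MUL", "", "XYZ"])
```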


@@ -1,63 +0,0 @@
-- | Planning MySQL queries and subscriptions.
module Hasura.Backends.MySQL.Plan
( planQuery,
queryToActionForest,
)
where
import Control.Monad.Validate
import Data.Aeson qualified as J
import Data.ByteString.Lazy (toStrict)
import Data.Text.Extended
import Data.Tree
import Hasura.Backends.MySQL.DataLoader.Plan qualified as DataLoader
import Hasura.Backends.MySQL.FromIr
import Hasura.Backends.MySQL.Types
import Hasura.Base.Error
import Hasura.Prelude hiding (first)
import Hasura.RQL.IR
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Column qualified as RQL
import Hasura.Session
-- | Plan the query and then produce a forest of actions for the executor.
queryToActionForest ::
MonadError QErr m =>
UserInfo ->
QueryDB 'MySQL Void (UnpreparedValue 'MySQL) ->
m (DataLoader.HeadAndTail, Forest DataLoader.PlannedAction)
queryToActionForest userInfo qrf = do
select <- planQuery (_uiSession userInfo) qrf
let (!headAndTail, !plannedActionsList) =
DataLoader.runPlan
(DataLoader.planSelectHeadAndTail Nothing Nothing select)
!actionsForest = DataLoader.actionsForest id plannedActionsList
pure (headAndTail, actionsForest)
planQuery ::
MonadError QErr m =>
SessionVariables ->
QueryDB 'MySQL Void (UnpreparedValue 'MySQL) ->
m Select
planQuery sessionVariables queryDB = do
rootField <- traverse (prepareValueQuery sessionVariables) queryDB
runValidate (runFromIr (fromRootField rootField))
`onLeft` (throw400 NotSupported . tshow)
-- | Prepare a value without any query planning; we just execute the
-- query with the values embedded.
prepareValueQuery ::
MonadError QErr m =>
SessionVariables ->
UnpreparedValue 'MySQL ->
m Expression
prepareValueQuery sessionVariables =
\case
UVLiteral x -> pure x
UVSession -> pure $ ValueExpression $ BinaryValue $ toStrict $ J.encode sessionVariables
UVParameter _ RQL.ColumnValue {..} -> pure $ ValueExpression cvValue
UVSessionVar _typ sessionVariable -> do
value <-
getSessionVariableValue sessionVariable sessionVariables
`onNothing` throw400 NotFound ("missing session variable: " <>> sessionVariable)
pure $ ValueExpression $ TextValue value
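The `UVSessionVar` branch above looks a variable up in the session and fails with a "missing session variable" error when it is absent. A minimal sketch of that lookup logic, using a plain `Map` in place of `SessionVariables` (names here are illustrative):

```haskell
import qualified Data.Map as M

-- Sketch of the session-variable branch above: an unresolved variable
-- is an error; a resolved one becomes a plain text value.
resolveSessionVar :: M.Map String String -> String -> Either String String
resolveSessionVar vars v =
  maybe (Left ("missing session variable: " <> v)) Right (M.lookup v vars)
```

Usage: `resolveSessionVar (M.fromList [("x-hasura-role", "admin")]) "x-hasura-role"` yields `Right "admin"`.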


@ -1,52 +0,0 @@
{-# LANGUAGE TemplateHaskell #-}
module Hasura.Backends.MySQL.SQL
( runSQL,
RunSQL (..),
)
where
import Data.Aeson qualified as J
import Data.Aeson.TH
import Data.ByteString hiding (null, reverse)
import Data.Pool (withResource)
import Data.String (fromString)
import Data.Text qualified as T
import Data.Text.Encoding (decodeUtf8With)
import Data.Text.Encoding.Error (lenientDecode)
import Database.MySQL.Base (fetchFields, query, storeResult)
import Database.MySQL.Base.Types (Field (fieldName))
import Hasura.Backends.MySQL.Connection (fetchAllRows)
import Hasura.Backends.MySQL.Types (SourceConfig (..))
import Hasura.Base.Error
import Hasura.EncJSON
import Hasura.Prelude
import Hasura.RQL.DDL.Schema (RunSQLRes (..))
import Hasura.RQL.Types.BackendType
import Hasura.RQL.Types.Common
import Hasura.RQL.Types.Metadata
import Hasura.RQL.Types.SchemaCache
import Hasura.RQL.Types.SchemaCache.Build
data RunSQL = RunSQL
{ _Sql :: Text,
_Source :: SourceName
}
deriving (Show, Eq)
$(deriveJSON hasuraJSON ''RunSQL)
runSQL :: (MonadIO m, CacheRWM m, MonadError QErr m, MetadataM m) => RunSQL -> m EncJSON
runSQL (RunSQL sql source) = do
pool <- scConnectionPool <$> askSourceConfig @'MySQL source
result :: [[Maybe ByteString]] <- liftIO $
withResource pool $ \conn -> do
query conn (fromString . T.unpack $ sql)
r <- storeResult conn
fieldNames <- fmap (Just . fieldName) <$> fetchFields r -- fieldNames as Maybes for convenience
rows <- fetchAllRows r
pure (fieldNames : rows)
pure . encJFromJValue $
if null result
then RunSQLRes "CommandOK" J.Null
else RunSQLRes "TuplesOk" . J.toJSON . (fmap . fmap . fmap) (decodeUtf8With lenientDecode) $ result
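The `(fmap . fmap . fmap)` in the last line maps a single function under three functor layers at once: the list of rows, each row's list of columns, and each nullable `Maybe` cell. A minimal sketch of that composition:

```haskell
-- Composing fmap reaches through nested functors: here
-- [[Maybe a]] is list-of-rows, row-of-cells, nullable cell.
deepMap :: (a -> b) -> [[Maybe a]] -> [[Maybe b]]
deepMap = fmap . fmap . fmap
```

For example, `deepMap (+1) [[Just 1, Nothing]]` maps over every non-null cell while leaving `Nothing` intact, which is how the byte-string cells above are decoded to text in one pass.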


@ -1,13 +0,0 @@
module Hasura.Backends.MySQL.Schema.Introspection
( listAllTables,
)
where
import Hasura.Backends.MySQL.Types (TableName)
import Hasura.Base.Error (QErr, throw500)
import Hasura.Prelude
import Hasura.RQL.Types.Common (SourceName)
-- | List all tables, tracked or untracked, on a given data source.
listAllTables :: MonadError QErr m => SourceName -> m [TableName]
listAllTables _ = throw500 "listAllTables: not implemented for MySQL backend"


@ -1,415 +0,0 @@
{-# OPTIONS_GHC -fno-warn-orphans #-}
-- | Convert the simple AST to an SQL query, ready to be passed
-- to the mysql package's query/exec functions.
module Hasura.Backends.MySQL.ToQuery
( Printer,
toQueryPretty,
fromSelect,
toQueryFlat,
Query (..),
renderBuilderPretty,
runBuilderPretty,
)
where
import Data.ByteString (ByteString)
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
import Data.List (intersperse)
import Data.String
import Data.Text qualified as T
import Data.Text.Encoding qualified as T
import Data.Text.Lazy.Builder qualified as LT
import Data.Tuple (swap)
import Hasura.Backends.MySQL.Types
import Hasura.Prelude hiding (GT, LT)
newtype Query = Query {unQuery :: ByteString} deriving (Show, Eq, Monoid, Semigroup)
data Printer
= SeqPrinter [Printer]
| SepByPrinter Printer [Printer]
| NewlinePrinter
| QueryPrinter Query
| IndentPrinter Int Printer
deriving (Show, Eq)
instance IsString Printer where
fromString = QueryPrinter . Query . fromString
(<+>) :: Printer -> Printer -> Printer
(<+>) x y = SeqPrinter [x, y]
-- Printer generators
fromExpression :: Expression -> Printer
fromExpression =
\case
ValueExpression value -> QueryPrinter (fromScalarType value)
AndExpression xs ->
case xs of
[] -> truePrinter
_ ->
SepByPrinter
(NewlinePrinter <+> "AND ")
(fmap (\x -> "(" <+> fromExpression x <+> ")") (toList xs))
OrExpression xs ->
case xs of
[] -> falsePrinter
_ ->
SepByPrinter
(NewlinePrinter <+> "OR ")
(fmap (\x -> "(" <+> fromExpression x <+> ")") (toList xs))
NotExpression expression -> "NOT " <+> fromExpression expression
ExistsExpression sel -> "EXISTS (" <+> fromSelect sel <+> ")"
InExpression x xs ->
fromExpression x <+> " IN " <+> SeqPrinter (fmap fromExpression xs)
ColumnExpression fieldName -> fromFieldName fieldName
MethodExpression field method args ->
fromExpression field
<+> "."
<+> fromString (T.unpack method)
<+> "("
<+> SeqPrinter (map fromExpression args)
<+> ")"
OpExpression op x y ->
"("
<+> fromExpression x
<+> ") "
<+> fromOp op
<+> " ("
<+> fromExpression y
<+> ")"
fromScalarType :: ScalarValue -> Query
fromScalarType = \case
BigValue v -> Query $ fromString $ show v
BinaryValue v -> Query $ fromString $ show v
BitValue v -> Query $ fromString $ show v
BlobValue v -> Query $ fromString $ show v
CharValue v -> Query $ fromString $ show v
DatetimeValue v -> Query $ fromString $ show v
DateValue v -> Query $ fromString $ show v
DecimalValue v -> Query $ fromString $ show v
DoubleValue v -> Query $ fromString $ show v
EnumValue v -> Query $ fromString $ show v
FloatValue v -> Query $ fromString $ show v
GeometrycollectionValue v -> Query $ fromString $ show v
GeometryValue v -> Query $ fromString $ show v
IntValue v -> Query $ fromString $ show v
JsonValue v -> Query $ fromString $ show v
LinestringValue v -> Query $ fromString $ show v
MediumValue v -> Query $ fromString $ show v
MultilinestringValue v -> Query $ fromString $ show v
MultipointValue v -> Query $ fromString $ show v
MultipolygonValue v -> Query $ fromString $ show v
NullValue -> Query $ fromString "NULL"
NumericValue v -> Query $ fromString $ show v
PointValue v -> Query $ fromString $ show v
PolygonValue v -> Query $ fromString $ show v
SmallValue v -> Query $ fromString $ show v
TextValue v -> Query $ fromString $ show v
TimestampValue v -> Query $ fromString $ show v
TimeValue v -> Query $ fromString $ show v
TinyValue v -> Query $ fromString $ show v
VarbinaryValue v -> Query $ fromString $ show v
VarcharValue v -> Query $ fromString $ show v
YearValue v -> Query $ fromString $ show v
other -> error $ "fromScalarType: not implemented " <> show other
fromOp :: Op -> Printer
fromOp =
\case
LT -> "<"
GT -> ">"
GTE -> ">="
LTE -> "<="
IN -> "IN"
NIN -> "NOT IN"
LIKE -> "LIKE"
NLIKE -> "NOT LIKE"
EQ' -> "="
NEQ' -> "!="
fromFieldName :: FieldName -> Printer
fromFieldName (FieldName {..}) =
fromNameText fNameEntity <+> "." <+> fromNameText fName
fromSelect :: Select -> Printer
fromSelect Select {..} =
SepByPrinter
NewlinePrinter
$ [ "SELECT "
<+> IndentPrinter
7
( SepByPrinter
("," <+> NewlinePrinter)
(map fromProjection (toList selectProjections))
),
"FROM " <+> IndentPrinter 5 (fromFrom selectFrom),
fromWhere selectWhere,
fromOrderBys selectOrderBy,
fromOffsetAndLimit selectSqlTop selectSqlOffset
]
-- https://dev.mysql.com/doc/refman/5.7/en/select.html
fromOffsetAndLimit :: Top -> Maybe Int -> Printer
fromOffsetAndLimit NoTop Nothing = ""
fromOffsetAndLimit NoTop (Just offset) =
SeqPrinter
[ "LIMIT " <+> fromString (show (maxBound :: Int)),
IndentPrinter 9 (SepByPrinter NewlinePrinter [" OFFSET " <+> fromString (show offset)])
]
fromOffsetAndLimit (Top val) Nothing = SeqPrinter ["LIMIT " <+> fromString (show val)]
fromOffsetAndLimit (Top val) (Just offset) =
SeqPrinter
[ "LIMIT " <+> fromString (show val),
IndentPrinter 9 (SepByPrinter NewlinePrinter [" OFFSET " <+> fromString (show offset)])
]
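`fromOffsetAndLimit` works around the fact that MySQL has no standalone `OFFSET` clause: an offset can only follow a `LIMIT`, so an offset without a user-supplied limit is rendered with a huge limit (`maxBound :: Int`). The four cases can be sketched as a plain string renderer (a simplification of the `Printer`-based code above):

```haskell
-- Sketch of the LIMIT/OFFSET rendering above: MySQL has no bare
-- OFFSET, so an offset without a limit gets a very large LIMIT.
renderLimitOffset :: Maybe Int -> Maybe Int -> String
renderLimitOffset Nothing Nothing = ""
renderLimitOffset Nothing (Just off) =
  "LIMIT " <> show (maxBound :: Int) <> " OFFSET " <> show off
renderLimitOffset (Just lim) Nothing = "LIMIT " <> show lim
renderLimitOffset (Just lim) (Just off) =
  "LIMIT " <> show lim <> " OFFSET " <> show off
```

Usage: `renderLimitOffset (Just 10) (Just 5)` yields `"LIMIT 10 OFFSET 5"`.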
fromOrderBys ::
Maybe (NonEmpty OrderBy) -> Printer
fromOrderBys Nothing = ""
fromOrderBys (Just orderBys) =
SeqPrinter
[ "ORDER BY ",
IndentPrinter
9
( SepByPrinter
NewlinePrinter
[ SepByPrinter
("," <+> NewlinePrinter)
(concatMap fromOrderBy (toList orderBys))
]
)
]
fromOrderBy :: OrderBy -> [Printer]
fromOrderBy OrderBy {..} =
[ fromNullsOrder orderByFieldName orderByNullsOrder,
-- Above: This doesn't do anything when using text, ntext or image
-- types. See below on CAST commentary.
wrapNullHandling (fromFieldName orderByFieldName)
<+> " "
<+> fromOrder orderByOrder
]
where
wrapNullHandling inner = inner
fromOrder :: Order -> Printer
fromOrder =
\case
Asc -> "ASC"
Desc -> "DESC"
-- Source <https://gregrs-uk.github.io/2011-02-02/mysql-order-by-with-nulls-first-or-last/>
fromNullsOrder :: FieldName -> NullsOrder -> Printer
fromNullsOrder fieldName =
\case
NullsAnyOrder -> ""
-- ISNULL(NULL)=1, ISNULL(_) = 0 -- therefore we need DESC to put
-- nulls first.
NullsFirst -> "ISNULL(" <+> fromFieldName fieldName <+> ") DESC"
NullsLast -> "ISNULL(" <+> fromFieldName fieldName <+> ") ASC"
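MySQL has no `NULLS FIRST` / `NULLS LAST` clause, so the code above sorts on `ISNULL(col)` as a leading key: `ISNULL` is 1 for NULL and 0 otherwise, so `DESC` pushes nulls first and `ASC` pushes them last. The same idea in pure Haskell, sorting on an is-null key ahead of the value (an illustrative sketch, not from the original module):

```haskell
import Data.List (sortOn)
import Data.Maybe (isNothing)

-- NULLS LAST, ascending: like "ORDER BY ISNULL(col) ASC, col ASC",
-- non-null keys (False) sort before null keys (True).
nullsLastAsc :: Ord a => [Maybe a] -> [Maybe a]
nullsLastAsc = sortOn (\x -> (isNothing x, x))
```

For example, `nullsLastAsc [Nothing, Just 2, Just 1]` yields `[Just 1, Just 2, Nothing]`.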
fromProjection :: Projection -> Printer
fromProjection =
\case
ExpressionProjection aliasedExpression ->
fromAliased (fmap fromExpression aliasedExpression)
FieldNameProjection aliasedFieldName ->
fromAliased (fmap fromFieldName aliasedFieldName)
AggregateProjection aliasedAggregate ->
fromAliased (fmap fromAggregate aliasedAggregate)
AggregateProjections aliasedAggregates ->
fromAliased
( fmap
( \aggs ->
"STRUCT("
<+> IndentPrinter
7
( SepByPrinter
", "
(fmap (fromAliased . fmap fromAggregate) (toList aggs))
)
<+> ")"
)
aliasedAggregates
)
StarProjection -> "*"
EntityProjection aliasedEntity ->
fromAliased
( fmap
( \(fields :: [(FieldName, FieldOrigin)]) ->
-- Example:
-- STRUCT(
-- IFNULL(
-- `aa_articles1`.`aggregate`,
-- STRUCT(0 as count, struct(null as id) as sum)
-- ) as aggregate
-- ) AS `articles_aggregate`
--
-- The (AS `articles_aggregate`) part at the end is rendered by 'fromAliased' evaluating
-- at the root of this branch, and not by anything below
"STRUCT("
<+> ( SepByPrinter
", "
( fields
<&> \(fieldName@FieldName {..}, fieldOrigin :: FieldOrigin) ->
"IFNULL("
<+> fromFieldName fieldName
<+> ", "
<+> fromFieldOrigin fieldOrigin
<+> ") AS "
<+> fromNameText fName
)
)
<+> ")"
)
aliasedEntity
)
ArrayEntityProjection entityAlias aliasedEntity ->
fromAliased
( fmap
( \aggs ->
"ARRAY(SELECT AS STRUCT "
<+> IndentPrinter
7
(SepByPrinter ", " (fmap fromFieldNameNaked (toList aggs)))
<+> " FROM "
<+> fromNameText (entityAliasText entityAlias)
<+> ".agg)"
)
aliasedEntity
)
where
fromFieldNameNaked :: FieldName -> Printer
fromFieldNameNaked (FieldName {..}) =
fromNameText fName
fromAggregate :: Aggregate -> Printer
fromAggregate =
\case
CountAggregate countable -> "COUNT(" <+> fromCountable countable <+> ")"
OpAggregate text args ->
QueryPrinter (Query $ fromString $ show text)
<+> "("
<+> SepByPrinter ", " (map fromExpression (toList args))
<+> ")"
TextAggregate text -> fromExpression (ValueExpression (TextValue text))
fromCountable :: Countable FieldName -> Printer
fromCountable =
\case
StarCountable -> "*"
NonNullFieldCountable fields ->
SepByPrinter ", " (map fromFieldName (toList fields))
DistinctCountable fields ->
"DISTINCT "
<+> SepByPrinter ", " (map fromFieldName (toList fields))
fromWhere :: Where -> Printer
fromWhere =
\case
Where [] -> ""
Where expressions ->
"WHERE "
<+> IndentPrinter 6 (fromExpression (AndExpression expressions))
fromFrom :: From -> Printer
fromFrom =
\case
FromQualifiedTable aliasedQualifiedTableName ->
fromAliased (fmap fromTableName aliasedQualifiedTableName)
FromSelect select -> fromAliased (fmap (parens . fromSelect) select)
parens :: Printer -> Printer
parens x = "(" <+> IndentPrinter 1 x <+> ")"
fromTableName :: TableName -> Printer
fromTableName TableName {name, schema} =
maybe "" ((<+> ".") . fromNameText) schema <+> fromNameText name
fromAliased :: Aliased Printer -> Printer
fromAliased Aliased {..} =
aliasedThing
<+> ((" AS " <+>) . fromNameText) aliasedAlias
fromNameText :: Text -> Printer
fromNameText t = QueryPrinter (Query . fromString . T.unpack $ t)
truePrinter :: Printer
truePrinter = "TRUE"
falsePrinter :: Printer
falsePrinter = "FALSE"
--------------------------------------------------------------------------------
-- Basic printing API
toQueryFlat :: Printer -> Query
toQueryFlat = go 0
where
go level =
\case
QueryPrinter q -> q
SeqPrinter xs -> mconcat (filter notEmpty (map (go level) xs))
SepByPrinter x xs ->
mconcat
(intersperse (go level x) (filter notEmpty (map (go level) xs)))
NewlinePrinter -> Query " "
IndentPrinter n p -> go (level + n) p
notEmpty = (/= mempty)
toQueryPretty :: Printer -> Query
toQueryPretty = go 0
where
go level =
\case
QueryPrinter q -> q
SeqPrinter xs -> mconcat (filter notEmpty (map (go level) xs))
SepByPrinter x xs ->
mconcat
(intersperse (go level x) (filter notEmpty (map (go level) xs)))
NewlinePrinter -> Query $ fromString $ show $ "\n" <> indentation level
IndentPrinter n p -> go (level + n) p
indentation n = T.replicate n " "
notEmpty = (/= mempty)
-- | Produces a query with holes, and a mapping from hole index to scalar value.
renderBuilderPretty :: Printer -> (LT.Builder, InsOrdHashMap Int ScalarValue)
renderBuilderPretty =
second (InsOrdHashMap.fromList . map swap . InsOrdHashMap.toList)
. flip runState mempty
. runBuilderPretty
runBuilderPretty :: Printer -> State (InsOrdHashMap ScalarValue Int) LT.Builder
runBuilderPretty = go 0
where
go level =
\case
SeqPrinter xs -> fmap (mconcat . filter notEmpty) (mapM (go level) xs)
SepByPrinter x xs -> do
i <- go level x
fmap (mconcat . intersperse i . filter notEmpty) (mapM (go level) xs)
NewlinePrinter -> pure ("\n" <> indentation level)
IndentPrinter n p -> go (level + n) p
QueryPrinter Query {unQuery = q} -> pure . LT.fromText . T.decodeUtf8 $ q
indentation n = LT.fromText (T.replicate n " ")
notEmpty = (/= mempty)
fromFieldOrigin :: FieldOrigin -> Printer
fromFieldOrigin = \case
NoOrigin -> "NULL"
AggregateOrigin aliasedAggregates ->
"STRUCT("
<+>
-- Example: "0 AS count, STRUCT(NULL AS id) AS sum"
SepByPrinter ", " (fromAliased . fmap fromNullAggregate <$> aliasedAggregates)
<+> ")"
fromNullAggregate :: Aggregate -> Printer
fromNullAggregate = \case
CountAggregate _ -> "0"
OpAggregate _text _exp -> "NULL"
TextAggregate _text -> "NULL"


@ -1,8 +0,0 @@
-- | Types for MySQL
module Hasura.Backends.MySQL.Types
( module Hasura.Backends.MySQL.Types.Internal,
)
where
import Hasura.Backends.MySQL.Types.Instances ()
import Hasura.Backends.MySQL.Types.Internal


@ -1,270 +0,0 @@
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}
-- | Instances that are slow to compile.
module Hasura.Backends.MySQL.Types.Instances () where
import Autodocodec
( HasCodec (codec),
dimapCodec,
optionalFieldWithDefault',
parseAlternative,
requiredField,
requiredField',
)
import Autodocodec qualified as AC
import Autodocodec.Extended (optionalFieldOrIncludedNull')
import Control.DeepSeq
import Data.Aeson qualified as J
import Data.Aeson.Casing qualified as J
import Data.Aeson.Extended
import Data.Aeson.TH qualified as J
import Data.Aeson.Types
import Data.Pool
import Data.Text.Extended (ToTxt (..))
import Database.MySQL.Base (Connection)
import Database.MySQL.Base.Types qualified as MySQLTypes (Type (..))
import Hasura.Backends.MySQL.Types.Internal
import Hasura.Base.ErrorValue qualified as ErrorValue
import Hasura.Base.ToErrorValue
import Hasura.Prelude
import Language.Haskell.TH
import Language.Haskell.TH.Syntax
----
---- Countable instances
-- These instances must be defined before the TH-defined instances below.
deriving instance Generic (Countable n)
instance Hashable n => Hashable (Countable n)
deriving instance Eq n => Eq (Countable n)
deriving instance Show n => Show (Countable n)
deriving instance Data n => Data (Countable n)
instance NFData n => NFData (Countable n)
instance ToJSON n => ToJSON (Countable n)
instance FromJSON n => FromJSON (Countable n)
----
---- TH-defined instances
$( concat <$> for
[ ''ScalarType
]
\name ->
[d|
deriving instance Generic $(conT name)
instance Hashable $(conT name)
deriving instance Data $(conT name)
instance NFData $(conT name)
|]
)
$( fmap concat $ for
[''Aliased]
\name ->
[d|
deriving instance Generic ($(conT name) a)
instance Hashable a => Hashable ($(conT name) a)
deriving instance Eq a => Eq ($(conT name) a)
instance NFData a => NFData ($(conT name) a)
deriving instance Show a => Show ($(conT name) a)
deriving instance Functor $(conT name)
deriving instance Data a => Data ($(conT name) a)
|]
)
$( concat <$> for
[ ''Where,
''Aggregate,
''EntityAlias,
''OrderBy,
''JoinAlias,
''Reselect,
''Expression,
''NullsOrder,
''Order,
''Top,
''TableName,
''Select,
''FieldName,
''FieldOrigin,
''Projection,
''From,
''Join,
''Op,
''JoinType
]
\name ->
[d|
deriving instance Generic $(conT name)
instance Hashable $(conT name)
deriving instance Eq $(conT name)
deriving instance Show $(conT name)
deriving instance Data $(conT name)
instance NFData $(conT name)
|]
)
$( concat <$> for
[''TableName, ''ScalarType]
\name -> [d|deriving instance Ord $(conT name)|]
)
$( concat <$> for
[''TableName, ''NullsOrder, ''Order]
\name -> [d|deriving instance Lift $(conT name)|]
)
$( concat <$> for
[''Order, ''NullsOrder, ''ScalarType, ''FieldName]
\name ->
[d|
instance ToJSON $(conT name) where
toJSON = genericToJSON hasuraJSON
instance FromJSON $(conT name) where
parseJSON = genericParseJSON hasuraJSON
|]
)
----
---- Manually-defined instances
instance ToTxt TableName where
toTxt TableName {..} = name
instance HasCodec TableName where
codec = parseAlternative objCodec strCodec
where
objCodec =
AC.object "MySQLTableName" $
TableName
<$> requiredField' "name"
AC..= name
<*> optionalFieldOrIncludedNull' "schema"
AC..= schema
strCodec = flip TableName Nothing <$> codec
instance FromJSON TableName where
parseJSON v@(String _) =
TableName <$> parseJSON v <*> pure Nothing
parseJSON (Object o) =
TableName
<$> o
.: "name"
<*> o
.:? "schema"
parseJSON _ =
fail "expecting a string/object for TableName"
instance ToJSON TableName where
toJSON TableName {..} = object ["name" .= name, "schema" .= schema]
instance ToJSONKey TableName where
toJSONKey =
toJSONKeyText $ \(TableName {schema, name}) ->
maybe "" (<> ".") schema <> name
instance HasCodec Column where
codec = dimapCodec Column unColumn codec
instance ToJSONKey ScalarType
instance ToTxt ScalarType where
toTxt = tshow
instance ToErrorValue ScalarType where
toErrorValue = ErrorValue.squote . tshow
deriving newtype instance Monoid Where
deriving newtype instance Semigroup Where
instance Monoid Top where
mempty = NoTop
instance Semigroup Top where
(<>) :: Top -> Top -> Top
(<>) NoTop x = x
(<>) x NoTop = x
(<>) (Top x) (Top y) = Top (min x y)
instance HasCodec ConnPoolSettings where
codec =
AC.object "MySQLConnPoolSettings" $
ConnPoolSettings
<$> optionalFieldWithDefault' "idle_timeout" (_cscIdleTimeout defaultConnPoolSettings)
AC..= _cscIdleTimeout
<*> optionalFieldWithDefault' "max_connections" (_cscMaxConnections defaultConnPoolSettings)
AC..= _cscMaxConnections
instance J.FromJSON ConnPoolSettings where
parseJSON = J.withObject "MySQL pool settings" $ \o ->
ConnPoolSettings
<$> o J..:? "idle_timeout" J..!= _cscIdleTimeout defaultConnPoolSettings
<*> o J..:? "max_connections" J..!= _cscMaxConnections defaultConnPoolSettings
$(J.deriveToJSON hasuraJSON ''ConnPoolSettings)
instance J.ToJSON Expression where
toJSON (ValueExpression scalarValue) = J.toJSON scalarValue
toJSON expr = error $ "ToJSON: not implemented" <> show expr -- https://github.com/hasura/graphql-engine-mono/issues/1951
instance J.FromJSON Expression where
parseJSON value = ValueExpression <$> J.parseJSON value
instance HasCodec ConnSourceConfig where
codec =
AC.object "MySQLConnSourceConfig" $
ConnSourceConfig
<$> requiredField "host" hostDoc
AC..= _cscHost
<*> requiredField' "port"
AC..= _cscPort
<*> requiredField' "user"
AC..= _cscUser
<*> requiredField' "password"
AC..= _cscPassword
<*> requiredField' "database"
AC..= _cscDatabase
<*> requiredField' "pool_settings"
AC..= _cscPoolSettings
where
hostDoc = "Works with `127.0.0.1` but not with `localhost`: https://mariadb.com/kb/en/troubleshooting-connection-issues/#localhost-and"
$(J.deriveJSON (J.aesonDrop 4 J.snakeCase) {J.omitNothingFields = False} ''ConnSourceConfig)
instance J.ToJSON (Pool Connection) where
toJSON = const (J.String "_REDACTED_")
instance Eq (Pool Connection) where
_ == _ = True
deriving instance Eq SourceConfig
deriving instance Generic SourceConfig
deriving instance J.ToJSON SourceConfig


@ -1,415 +0,0 @@
{-# OPTIONS -fno-warn-orphans #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE OverloadedLists #-}
module Hasura.Backends.MySQL.Types.Internal
( Aliased (..),
ConnSourceConfig (..),
SourceConfig (..),
Column (..),
JoinType (..),
ScalarValue (..),
Expression (..),
Top (..),
Op (..),
ConnPoolSettings (..),
FieldName (..),
FieldOrigin (..),
EntityAlias (..),
Countable (..),
Aggregate (..),
Projection (..),
TableName (..),
From (..),
Reselect (..),
JoinAlias (..),
Join (..),
Where (..),
Order (..),
NullsOrder (..),
ScalarType,
OrderBy (..),
Select (..),
defaultConnPoolSettings,
ConstraintName (..),
FunctionName (..),
parseMySQLScalarType,
parseScalarValue,
mkMySQLScalarTypeName,
)
where
import Autodocodec (HasCodec (codec), dimapCodec)
import Autodocodec qualified as AC
import Data.Aeson qualified as J
import Data.ByteString
import Data.Data
import Data.HashSet.InsOrd (InsOrdHashSet)
import Data.Hashable
import Data.Int
import Data.Pool
import Data.Set
import Data.Text qualified as T
import Data.Text.Encoding (decodeUtf8With, encodeUtf8)
import Data.Text.Encoding.Error (lenientDecode)
import Data.Text.Extended (ToTxt (..))
import Data.Word (Word16)
import Database.MySQL.Base (Connection)
import Database.MySQL.Base.Types qualified as MySQLTypes (Type (..))
import Hasura.Base.Error
import Hasura.Base.ErrorValue qualified as ErrorValue
import Hasura.Base.ToErrorValue
import Hasura.Prelude
import Language.GraphQL.Draft.Syntax qualified as G
data Aliased a = Aliased
{ aliasedThing :: a,
aliasedAlias :: Text
}
-- | A subset of Database.MySQL.Simple.ConnectInfo
data ConnSourceConfig = ConnSourceConfig
{ -- | Works with @127.0.0.1@ but not with @localhost@: https://mariadb.com/kb/en/troubleshooting-connection-issues/#localhost-and
_cscHost :: Text,
_cscPort :: Word16,
_cscUser :: Text,
_cscPassword :: Text,
_cscDatabase :: Text,
_cscPoolSettings :: ConnPoolSettings
}
deriving (Eq, Show, NFData, Generic, Hashable)
data SourceConfig = SourceConfig
{ scConfig :: ConnSourceConfig,
scConnectionPool :: Pool Connection
}
instance Show SourceConfig where
show _ = "(SourceConfig <details>)"
newtype ConstraintName = ConstraintName {unConstraintName :: Text}
deriving newtype (Show, Eq, ToTxt, J.FromJSON, J.ToJSON, Hashable, NFData)
instance ToErrorValue ConstraintName where
toErrorValue = ErrorValue.squote . unConstraintName
newtype FunctionName = FunctionName {unFunctionName :: Text}
deriving newtype (Show, Eq, Ord, ToTxt, J.FromJSONKey, J.ToJSONKey, J.FromJSON, J.ToJSON, Hashable, NFData)
instance HasCodec FunctionName where
codec = dimapCodec FunctionName unFunctionName codec
instance ToErrorValue FunctionName where
toErrorValue = ErrorValue.squote . unFunctionName
newtype Column = Column {unColumn :: Text}
deriving newtype (Show, Eq, Ord, ToTxt, J.FromJSONKey, J.ToJSONKey, J.FromJSON, J.ToJSON, Hashable, NFData)
deriving (Generic)
instance ToErrorValue Column where
toErrorValue = ErrorValue.squote . unColumn
data ScalarValue
= BigValue Int32 -- Not Int64 due to scalar-representation
| BinaryValue ByteString
| BitValue Bool
| BlobValue ByteString
| CharValue Text
| DatetimeValue Text
| DateValue Text
| DecimalValue Double -- Not Decimal due to scalar-representation
| DoubleValue Double
| EnumValue Text
| FloatValue Double -- Not Float due to scalar-representation
| GeometrycollectionValue Text -- TODO
| GeometryValue Text -- TODO
| IntValue Int32
| JsonValue J.Value
| LinestringValue Text -- TODO
| MediumValue Int32 -- (actually, 3-bytes)
| MultilinestringValue Text -- TODO
| MultipointValue Text -- TODO
| MultipolygonValue Text -- TODO
| NullValue
| NumericValue Double -- Not Decimal due to scalar-representation -- TODO: Double check
| PointValue Text -- TODO
| PolygonValue Text -- TODO
| SetValue (Set Text)
| SmallValue Int32 -- Not Int16 due to scalar-representation
| TextValue Text
| TimestampValue Text
| TimeValue Text
| TinyValue Int32 -- Not Int8 due to scalar-representation
| UnknownValue Text
| VarbinaryValue ByteString
| VarcharValue Text
| YearValue Text
deriving (Show, Read, Eq, Ord, Generic, J.ToJSON, J.ToJSONKey, J.FromJSON, Data, NFData)
instance Hashable ScalarValue where
hashWithSalt i = hashWithSalt i . tshow
instance ToTxt ScalarValue where
toTxt = tshow
instance J.ToJSON ByteString where
toJSON = J.String . decodeUtf8With lenientDecode
instance J.FromJSON ByteString where
parseJSON = J.withText "ByteString" (pure . encodeUtf8)
data Expression
= ValueExpression ScalarValue
| AndExpression [Expression]
| OrExpression [Expression]
| NotExpression Expression
| ExistsExpression Select
| InExpression Expression [Expression]
| OpExpression Op Expression Expression
| ColumnExpression FieldName
| -- expression.text(e1, e2, ..)
MethodExpression Expression Text [Expression]
data Top
= NoTop
| Top Int
data Op
= LT
| LTE
| GT
| GTE
| IN
| LIKE
| NLIKE
| NIN
| EQ'
| NEQ'
data ConnPoolSettings = ConnPoolSettings
{ _cscIdleTimeout :: Word,
_cscMaxConnections :: Word
}
deriving (Eq, Show, NFData, Generic, Hashable)
data FieldName = FieldName
{ fName :: Text,
fNameEntity :: Text
}
data FieldOrigin
= NoOrigin
| AggregateOrigin [Aliased Aggregate]
newtype EntityAlias = EntityAlias
{ entityAliasText :: Text
}
data Countable name
= StarCountable
| NonNullFieldCountable (NonEmpty name)
| DistinctCountable (NonEmpty name)
data Aggregate
= CountAggregate (Countable FieldName)
| OpAggregate Text [Expression]
| TextAggregate Text
data Projection
= ExpressionProjection (Aliased Expression)
| FieldNameProjection (Aliased FieldName)
| AggregateProjections (Aliased (NonEmpty (Aliased Aggregate)))
| AggregateProjection (Aliased Aggregate)
| StarProjection
| EntityProjection (Aliased [(FieldName, FieldOrigin)])
| ArrayEntityProjection EntityAlias (Aliased [FieldName])
data TableName = TableName
{ name :: Text,
schema :: Maybe Text
}
instance ToErrorValue TableName where
toErrorValue TableName {name, schema} =
ErrorValue.squote $ maybe name (<> "." <> name) schema
data From
= FromQualifiedTable (Aliased TableName)
| FromSelect (Aliased Select)
data Reselect = Reselect
{ reselectProjections :: [Projection],
reselectWhere :: Where
}
data JoinAlias = JoinAlias
{ joinAliasEntity :: Text,
joinAliasField :: Maybe Text
}
data Join = Join
{ -- | For display/debug purposes.
joinRightTable :: EntityAlias,
-- | Where to pull the data from.
joinSelect :: Select,
-- | Type of join to perform in-Haskell.
joinType :: JoinType,
-- | Wrap the output in this field name.
joinFieldName :: Text,
joinTop :: Top,
joinOffset :: Maybe Int
}
data JoinType
= -- | A join without any 'ON x=y' construct. We're querying from a
-- table and doing our own WHERE clauses.
OnlessJoin
| -- | An array join on the given fields.
ArrayJoin [(FieldName, FieldName)]
| -- | An array aggregate join.
ArrayAggregateJoin [(FieldName, FieldName)]
| -- | Simple object join on the fields.
ObjectJoin [(FieldName, FieldName)]
newtype Where
= Where [Expression]
data Order
= Asc
| Desc
data NullsOrder
= NullsFirst
| NullsLast
| NullsAnyOrder
type ScalarType = MySQLTypes.Type
instance AC.HasCodec ScalarType where
codec =
AC.CommentCodec "A MySQL scalar type." $
AC.stringConstCodec
[ (MySQLTypes.NewDate, "DATE"),
(MySQLTypes.NewDecimal, "DECIMAL"),
(MySQLTypes.Decimal, "DECIMAL"),
(MySQLTypes.Tiny, "TINYINT"),
(MySQLTypes.Short, "SMALLINT"),
(MySQLTypes.Long, "INT"),
(MySQLTypes.Float, "FLOAT"),
(MySQLTypes.Double, "DOUBLE"),
(MySQLTypes.Null, "NULL"),
(MySQLTypes.Timestamp, "TIMESTAMP"),
(MySQLTypes.LongLong, "BIGINT"),
(MySQLTypes.Int24, "MEDIUMINT"),
(MySQLTypes.Date, "DATE"),
(MySQLTypes.Time, "TIME"),
(MySQLTypes.DateTime, "DATETIME"),
(MySQLTypes.Year, "YEAR"),
(MySQLTypes.VarChar, "VARCHAR"),
(MySQLTypes.Bit, "BIT"),
(MySQLTypes.Enum, "ENUM"),
(MySQLTypes.Set, "SET"),
(MySQLTypes.TinyBlob, "TINYBLOB"),
(MySQLTypes.MediumBlob, "MEDIUMBLOB"),
(MySQLTypes.LongBlob, "LONGBLOB"),
(MySQLTypes.Blob, "BLOB"),
(MySQLTypes.VarString, "VARCHAR"),
(MySQLTypes.String, "TEXT"),
(MySQLTypes.Geometry, "GEOMETRY"),
(MySQLTypes.Json, "JSON")
]
data OrderBy = OrderBy
{ orderByFieldName :: FieldName,
orderByOrder :: Order,
orderByNullsOrder :: NullsOrder,
orderByType :: Maybe ScalarType
}
data Select = Select
{ selectProjections :: InsOrdHashSet Projection,
selectFrom :: From,
selectJoins :: [Join],
selectWhere :: Where,
selectOrderBy :: Maybe (NonEmpty OrderBy),
selectSqlOffset :: Maybe Int,
selectSqlTop :: Top,
selectGroupBy :: [FieldName],
selectFinalWantedFields :: Maybe [Text]
}
mkMySQLScalarTypeName :: MonadError QErr m => ScalarType -> m G.Name
mkMySQLScalarTypeName = \case
scalarType ->
G.mkName (scalarTypeDBName scalarType)
`onNothing` throw400
ValidationFailed
( "cannot use SQL type "
<> scalarTypeDBName scalarType
<> " in the GraphQL schema because its name is not a "
<> "valid GraphQL identifier"
)
scalarTypeDBName :: ScalarType -> Text
scalarTypeDBName = error "scalarTypeDBName: not implemented"
defaultConnPoolSettings :: ConnPoolSettings
defaultConnPoolSettings =
ConnPoolSettings
{ _cscIdleTimeout = 5,
_cscMaxConnections = 50
}
-- | ref: https://dev.mysql.com/doc/c-api/8.0/en/c-api-data-structures.html
--
-- DB has CHAR, BINARY, VARCHAR and VARBINARY
-- C API only has STRING and VARSTRING
-- Database.MySQL.Base.Types.Type has String, VarString and VarChar for some reason
parseMySQLScalarType :: Text -> ScalarType
parseMySQLScalarType scalarType =
case (T.toUpper scalarType) of
"BIGINT" -> MySQLTypes.LongLong
"BINARY" -> MySQLTypes.String
"BIT" -> MySQLTypes.Bit
"BLOB" -> MySQLTypes.Blob -- TinyBlob, MediumBlob, LongBlob
"CHAR" -> MySQLTypes.String
"DATE" -> MySQLTypes.Date -- Or NewDate. REVIEW: When to use NewDate :: Database.MySQL.Base.Types.Type then?
"DATETIME" -> MySQLTypes.DateTime
"DECIMAL" -> MySQLTypes.Decimal -- Or NewDecimal
"DOUBLE" -> MySQLTypes.Double
"ENUM" -> MySQLTypes.Enum
"FLOAT" -> MySQLTypes.Float
"GEOMETRYCOLLECTION" -> MySQLTypes.Geometry
"GEOMETRY" -> MySQLTypes.Geometry -- For all Geometry types. TODO: Check how to distinguish between these types when it becomes necessary
"INT" -> MySQLTypes.Long
"JSON" -> MySQLTypes.Json
"LINESTRING" -> MySQLTypes.Geometry -- For now Geometry could be considered as Text
"MEDIUMINT" -> MySQLTypes.Int24
"MULTILINESTRING" -> MySQLTypes.Geometry
"MULTIPOINT" -> MySQLTypes.Geometry
"MULTIPOLYGON" -> MySQLTypes.Geometry
"NULL" -> MySQLTypes.Null -- Not a column type, but we retain it as part of this definition to enumerate all possible types
"NUMERIC" -> MySQLTypes.Decimal -- Or NewDecimal
"POINT" -> MySQLTypes.Geometry
"POLYGON" -> MySQLTypes.Geometry
"SET" -> MySQLTypes.Set
"SMALLINT" -> MySQLTypes.Short
"TEXT" -> MySQLTypes.Blob
"TIME" -> MySQLTypes.Time
"TIMESTAMP" -> MySQLTypes.Timestamp
"TINYINT" -> MySQLTypes.Tiny
"VARBINARY" -> MySQLTypes.VarString
"VARCHAR" -> MySQLTypes.VarChar
"YEAR" -> MySQLTypes.Year
"TINYTEXT" -> MySQLTypes.String
"VARCHAR(45)" -> MySQLTypes.VarChar
"VARCHAR(450)" -> MySQLTypes.VarChar
"INT UNSIGNED" -> MySQLTypes.Long
"BIT(1)" -> MySQLTypes.Bit
-- _ -> MySQLTypes.Null
txt | "INT" `T.isPrefixOf` txt -> MySQLTypes.Long
txt -> error $ "parseMySQLScalartype: " <> show txt
parseScalarValue :: ScalarType -> J.Value -> Either QErr (ScalarValue)
parseScalarValue = error "parseScalarValue is yet to be implemented."
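The deleted parser above is a plain name-to-protocol-type lookup with an `INT`-prefix fallback. Since the surrounding Haskell is gone from the tree, here is a minimal Python sketch of that shape for illustration only — the function name and the string return values are hypothetical stand-ins for the `Database.MySQL.Base.Types` constructors:

```python
# Hypothetical sketch of the deleted parseMySQLScalarType logic:
# exact-name lookup first, then an "INT..." prefix fallback,
# mirroring the Haskell case expression above.

SCALAR_TYPES = {
    "BIGINT": "LongLong",
    "CHAR": "String",
    "DATETIME": "DateTime",
    "DECIMAL": "Decimal",
    "INT": "Long",
    "JSON": "Json",
    "TEXT": "Blob",
    "VARCHAR": "VarChar",
}

def parse_mysql_scalar_type(name: str) -> str:
    upper = name.upper()
    if upper in SCALAR_TYPES:
        return SCALAR_TYPES[upper]
    # Fallback: any other INT-prefixed type (e.g. "INT UNSIGNED")
    # is treated as Long, as in the `T.isPrefixOf` guard above.
    if upper.startswith("INT"):
        return "Long"
    raise ValueError(f"parse_mysql_scalar_type: {name}")
```

The Haskell original likewise fails hard on unknown names (via `error`), which is why the catch-all pattern sits last in the case expression.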

@@ -448,32 +448,6 @@ instance BackendEventTrigger 'BigQuery where
fetchEventInvocationLogs _ _ = throw400 NotSupported $ "Event triggers are not supported for BigQuery sources"
fetchEventById _ _ = throw400 NotSupported $ "Event triggers are not supported for BigQuery sources"
instance BackendEventTrigger 'MySQL where
insertManualEvent _ _ _ _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
fetchUndeliveredEvents _ _ _ _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
setRetry _ _ _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
recordSuccess _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
getMaintenanceModeVersion _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
recordError _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
recordError' _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
dropTriggerAndArchiveEvents _ _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
dropDanglingSQLTrigger _ _ _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
redeliverEvent _ _ = throw400 NotSupported "Event triggers are not supported for MySQL sources"
unlockEventsInSource _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
createTableEventTrigger _ _ _ _ _ _ _ _ = runExceptT $ throw400 NotSupported "Event triggers are not supported for MySQL sources"
createMissingSQLTriggers _ _ _ _ _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
checkIfTriggerExists _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
addCleanupSchedules _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
deleteAllScheduledCleanups _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
getCleanupEventsForDeletion _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
updateCleanupEventStatusToDead _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
updateCleanupEventStatusToPaused _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
updateCleanupEventStatusToCompleted _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
deleteEventTriggerLogs _ _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
fetchEventLogs _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
fetchEventInvocationLogs _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
fetchEventById _ _ = throw400 NotSupported $ "Event triggers are not supported for MySQL sources"
--------------------------------------------------------------------------------
-- TODO(jkachmar): See if there isn't a way to define the function that
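Every method of the deleted `BackendEventTrigger 'MySQL` instance above is the same stub: immediately raise a `NotSupported` 400 error. A hedged Python sketch of that pattern (class and method names are illustrative, not Hasura's API):

```python
class NotSupported(Exception):
    """Stands in for Hasura's `throw400 NotSupported` error."""

class MySQLEventTriggers:
    """Every event-trigger operation fails uniformly, as in the
    deleted `instance BackendEventTrigger 'MySQL` above."""

    _MESSAGE = "Event triggers are not supported for MySQL sources"

    def _unsupported(self, *_args, **_kwargs):
        raise NotSupported(self._MESSAGE)

    # All operations share the same stub; a representative subset:
    insert_manual_event = _unsupported
    fetch_undelivered_events = _unsupported
    record_success = _unsupported
    redeliver_event = _unsupported
```

Keeping every method a one-line stub made the instance trivially deletable: no event-trigger behaviour for MySQL ever existed, only uniform refusals.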

@@ -5,5 +5,4 @@ module Hasura.GraphQL.Execute.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.Execute as B ()
import Hasura.Backends.DataConnector.Adapter.Execute as B ()
import Hasura.Backends.MSSQL.Instances.Execute as B ()
import Hasura.Backends.MySQL.Instances.Execute as B ()
import Hasura.Backends.Postgres.Instances.Execute as B ()

@@ -5,5 +5,4 @@ module Hasura.GraphQL.Schema.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.Schema as B ()
import Hasura.Backends.DataConnector.Adapter.Schema as B ()
import Hasura.Backends.MSSQL.Instances.Schema as B ()
import Hasura.Backends.MySQL.Instances.Schema as B ()
import Hasura.Backends.Postgres.Instances.Schema as B ()

@@ -5,5 +5,4 @@ module Hasura.GraphQL.Transport.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.Transport as B ()
import Hasura.Backends.DataConnector.Adapter.Transport as B ()
import Hasura.Backends.MSSQL.Instances.Transport as B ()
import Hasura.Backends.MySQL.Instances.Transport as B ()
import Hasura.Backends.Postgres.Instances.Transport as B ()

@@ -54,7 +54,7 @@ TODO: Reference to open issue or rfc?
-- This type class enables storing and managing Hasura metadata in an isolated
-- database which will not interfere with user's database where tables/functions
-- are defined. Hence, it'll enable support for databases of multiple backends
-- like MySQL, MSSQL etc.
-- like MSSQL etc.
--
-- Error-handling is handled explicitly, with every function returning an
-- `Either QErr`. This is inelegant, but is required: we want the caller to

@@ -103,9 +103,6 @@ __BigQuery_comparison_exp = [G.name|_BigQuery_comparison_exp|]
__MSSQL_comparison_exp :: G.Name
__MSSQL_comparison_exp = [G.name|_MSSQL_comparison_exp|]
__MySQL_comparison_exp :: G.Name
__MySQL_comparison_exp = [G.name|_MySQL_comparison_exp|]
__cast :: G.Name
__cast = [G.name|_cast|]

@@ -21,7 +21,6 @@ data BackendTag (b :: BackendType) where
PostgresCockroachTag :: BackendTag ('Postgres 'Cockroach)
MSSQLTag :: BackendTag 'MSSQL
BigQueryTag :: BackendTag 'BigQuery
MySQLTag :: BackendTag 'MySQL
DataConnectorTag :: BackendTag 'DataConnector
-- Derive GEq and GCompare instances for BackendTag.
@@ -49,9 +48,6 @@ instance HasTag 'MSSQL where
instance HasTag 'BigQuery where
backendTag = BigQueryTag
instance HasTag 'MySQL where
backendTag = MySQLTag
instance HasTag 'DataConnector where
backendTag = DataConnectorTag
@@ -62,7 +58,6 @@ reify PostgresCitusTag = Postgres Citus
reify PostgresCockroachTag = Postgres Cockroach
reify MSSQLTag = MSSQL
reify BigQueryTag = BigQuery
reify MySQLTag = MySQL
reify DataConnectorTag = DataConnector
-- | Provides a title-cased name for a database kind, inferring the appropriate

@@ -39,7 +39,6 @@ data BackendType
= Postgres PostgresKind
| MSSQL
| BigQuery
| MySQL
| DataConnector
deriving stock (Show, Eq, Ord, Generic)
deriving anyclass (Hashable)
@@ -51,7 +50,6 @@ instance Witch.From BackendType NonEmptyText where
from (Postgres Cockroach) = [nonEmptyTextQQ|cockroach|]
from MSSQL = [nonEmptyTextQQ|mssql|]
from BigQuery = [nonEmptyTextQQ|bigquery|]
from MySQL = [nonEmptyTextQQ|mysql|]
from DataConnector = [nonEmptyTextQQ|dataconnector|]
instance ToTxt BackendType where
@@ -78,7 +76,6 @@ data BackendSourceKind (b :: BackendType) where
PostgresCockroachKind :: BackendSourceKind ('Postgres 'Cockroach)
MSSQLKind :: BackendSourceKind 'MSSQL
BigQueryKind :: BackendSourceKind 'BigQuery
MySQLKind :: BackendSourceKind 'MySQL
DataConnectorKind :: DataConnectorName -> BackendSourceKind 'DataConnector
deriving instance Show (BackendSourceKind b)
@@ -95,7 +92,6 @@ instance Witch.From (BackendSourceKind b) NonEmptyText where
from k@PostgresCockroachKind = Witch.into @NonEmptyText $ backendTypeFromBackendSourceKind k
from k@MSSQLKind = Witch.into @NonEmptyText $ backendTypeFromBackendSourceKind k
from k@BigQueryKind = Witch.into @NonEmptyText $ backendTypeFromBackendSourceKind k
from k@MySQLKind = Witch.into @NonEmptyText $ backendTypeFromBackendSourceKind k
from (DataConnectorKind dataConnectorName) = Witch.into @NonEmptyText dataConnectorName
instance ToTxt (BackendSourceKind b) where
@@ -123,9 +119,6 @@ instance FromJSON (BackendSourceKind ('MSSQL)) where
instance FromJSON (BackendSourceKind ('BigQuery)) where
parseJSON = mkParseStaticBackendSourceKind BigQueryKind
instance FromJSON (BackendSourceKind ('MySQL)) where
parseJSON = mkParseStaticBackendSourceKind MySQLKind
instance FromJSON (BackendSourceKind ('DataConnector)) where
parseJSON v = DataConnectorKind <$> parseJSON v
@@ -153,9 +146,6 @@ instance HasCodec (BackendSourceKind ('MSSQL)) where
instance HasCodec (BackendSourceKind ('BigQuery)) where
codec = mkCodecStaticBackendSourceKind BigQueryKind
instance HasCodec (BackendSourceKind ('MySQL)) where
codec = mkCodecStaticBackendSourceKind MySQLKind
instance HasCodec (BackendSourceKind ('DataConnector)) where
codec = bimapCodec dec enc gqlNameCodec
where
@@ -205,7 +195,6 @@ supportedBackends =
Postgres Cockroach,
MSSQL,
BigQuery,
MySQL,
DataConnector
]
@@ -239,5 +228,4 @@ backendTypeFromBackendSourceKind = \case
PostgresCockroachKind -> Postgres Cockroach
MSSQLKind -> MSSQL
BigQueryKind -> BigQuery
MySQLKind -> MySQL
DataConnectorKind _ -> DataConnector

@@ -5,5 +5,4 @@ module Hasura.RQL.Types.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.Types as B ()
import Hasura.Backends.DataConnector.Adapter.Backend as B ()
import Hasura.Backends.MSSQL.Instances.Types as B ()
import Hasura.Backends.MySQL.Instances.Types as B ()
import Hasura.Backends.Postgres.Instances.Types as B ()

@@ -5,5 +5,4 @@ module Hasura.RQL.Types.Metadata.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.Metadata as B ()
import Hasura.Backends.DataConnector.Adapter.Metadata as B ()
import Hasura.Backends.MSSQL.Instances.Metadata as B ()
import Hasura.Backends.MySQL.Instances.Metadata as B ()
import Hasura.Backends.Postgres.Instances.Metadata as B ()

@@ -7,5 +7,4 @@ module Hasura.RQL.Types.SchemaCache.Instances (module I) where
import Hasura.Backends.BigQuery.Instances.SchemaCache as I ()
import Hasura.Backends.DataConnector.Adapter.SchemaCache as I ()
import Hasura.Backends.MSSQL.Instances.SchemaCache as I ()
import Hasura.Backends.MySQL.Instances.SchemaCache as I ()
import Hasura.Backends.Postgres.Instances.SchemaCache as I ()

@@ -145,7 +145,6 @@ data AnyBackend (i :: BackendType -> Type)
| PostgresCockroachValue (i ('Postgres 'Cockroach))
| MSSQLValue (i 'MSSQL)
| BigQueryValue (i 'BigQuery)
| MySQLValue (i 'MySQL)
| DataConnectorValue (i 'DataConnector)
deriving (Generic)
@@ -156,7 +155,6 @@ type AllBackendsSatisfy (c :: BackendType -> Constraint) =
c ('Postgres 'Cockroach),
c 'MSSQL,
c 'BigQuery,
c 'MySQL,
c 'DataConnector
)
@@ -169,7 +167,6 @@ type SatisfiesForAllBackends
c (i ('Postgres 'Cockroach)),
c (i 'MSSQL),
c (i 'BigQuery),
c (i 'MySQL),
c (i 'DataConnector)
)
@@ -184,7 +181,6 @@ liftTag (Postgres Citus) = PostgresCitusValue PostgresCitusTag
liftTag (Postgres Cockroach) = PostgresCockroachValue PostgresCockroachTag
liftTag MSSQL = MSSQLValue MSSQLTag
liftTag BigQuery = BigQueryValue BigQueryTag
liftTag MySQL = MySQLValue MySQLTag
liftTag DataConnector = DataConnectorValue DataConnectorTag
-- | Obtain a @BackendType@ from a runtime value.
@@ -194,7 +190,6 @@ lowerTag (PostgresCitusValue _) = Postgres Citus
lowerTag (PostgresCockroachValue _) = Postgres Cockroach
lowerTag (MSSQLValue _) = MSSQL
lowerTag (BigQueryValue _) = BigQuery
lowerTag (MySQLValue _) = MySQL
lowerTag (DataConnectorValue _) = DataConnector
-- | Transforms an @AnyBackend i@ into an @AnyBackend j@.
@@ -211,7 +206,6 @@ mapBackend e f = case e of
PostgresCockroachValue x -> PostgresCockroachValue (f x)
MSSQLValue x -> MSSQLValue (f x)
BigQueryValue x -> BigQueryValue (f x)
MySQLValue x -> MySQLValue (f x)
DataConnectorValue x -> DataConnectorValue (f x)
-- | Traverse an @AnyBackend i@ into an @f (AnyBackend j)@.
@@ -231,7 +225,6 @@ traverseBackend e f = case e of
PostgresCockroachValue x -> PostgresCockroachValue <$> f x
MSSQLValue x -> MSSQLValue <$> f x
BigQueryValue x -> BigQueryValue <$> f x
MySQLValue x -> MySQLValue <$> f x
DataConnectorValue x -> DataConnectorValue <$> f x
-- | Creates a new @AnyBackend i@ for a given backend @b@ by wrapping the given @i b@.
@@ -248,7 +241,6 @@ mkAnyBackend x = case backendTag @b of
PostgresCockroachTag -> PostgresCockroachValue x
MSSQLTag -> MSSQLValue x
BigQueryTag -> BigQueryValue x
MySQLTag -> MySQLValue x
DataConnectorTag -> DataConnectorValue x
-- | Dispatch a function to the value inside the @AnyBackend@, that does not
@@ -266,7 +258,6 @@ runBackend b f = case b of
PostgresCockroachValue x -> f x
MSSQLValue x -> f x
BigQueryValue x -> f x
MySQLValue x -> f x
DataConnectorValue x -> f x
-- | Dispatch an existential using an universally quantified function while
@@ -288,7 +279,6 @@ dispatchAnyBackend e f = case e of
PostgresCockroachValue x -> f x
MSSQLValue x -> f x
BigQueryValue x -> f x
MySQLValue x -> f x
DataConnectorValue x -> f x
dispatchAnyBackendWithTwoConstraints ::
@@ -308,7 +298,6 @@ dispatchAnyBackendWithTwoConstraints e f = case e of
PostgresCockroachValue x -> f x
MSSQLValue x -> f x
BigQueryValue x -> f x
MySQLValue x -> f x
DataConnectorValue x -> f x
-- | Unlike 'dispatchAnyBackend', the expected constraint has a different kind.
@@ -328,7 +317,6 @@ dispatchAnyBackend' e f = case e of
PostgresCockroachValue x -> f x
MSSQLValue x -> f x
BigQueryValue x -> f x
MySQLValue x -> f x
DataConnectorValue x -> f x
-- | This allows you to apply a constraint to the Backend instances (c2)
@@ -350,7 +338,6 @@ dispatchAnyBackend'' e f = case e of
PostgresCockroachValue x -> f x
MSSQLValue x -> f x
BigQueryValue x -> f x
MySQLValue x -> f x
DataConnectorValue x -> f x
-- | Sometimes we need to run operations on two backends of the same type.
@@ -373,7 +360,6 @@ composeAnyBackend f e1 e2 owise = case (e1, e2) of
(PostgresCockroachValue x, PostgresCockroachValue y) -> f x y
(MSSQLValue x, MSSQLValue y) -> f x y
(BigQueryValue x, BigQueryValue y) -> f x y
(MySQLValue x, MySQLValue y) -> f x y
(DataConnectorValue x, DataConnectorValue y) -> f x y
(value1, value2) ->
if mapBackend value1 (Const . const ()) == mapBackend value2 (Const . const ())
@@ -397,7 +383,6 @@ mergeAnyBackend f e1 e2 owise = case (e1, e2) of
(PostgresCockroachValue x, PostgresCockroachValue y) -> PostgresCockroachValue (f x y)
(MSSQLValue x, MSSQLValue y) -> MSSQLValue (f x y)
(BigQueryValue x, BigQueryValue y) -> BigQueryValue (f x y)
(MySQLValue x, MySQLValue y) -> MySQLValue (f x y)
(DataConnectorValue x, DataConnectorValue y) -> DataConnectorValue (f x y)
(value1, value2) ->
if mapBackend value1 (Const . const ()) == mapBackend value2 (Const . const ())
@@ -419,7 +404,6 @@ unpackAnyBackend exists = case (backendTag @b, exists) of
(PostgresCockroachTag, PostgresCockroachValue x) -> Just x
(MSSQLTag, MSSQLValue x) -> Just x
(BigQueryTag, BigQueryValue x) -> Just x
(MySQLTag, MySQLValue x) -> Just x
(DataConnectorTag, DataConnectorValue x) -> Just x
(tag, value) ->
if mapBackend (mkAnyBackend tag) (Const . const ()) == mapBackend value (Const . const ())
@@ -461,8 +445,6 @@ dispatchAnyBackendArrow arrow = proc (ab, x) -> do
arrow @'MSSQL -< (val, x)
BigQueryValue val ->
arrow @'BigQuery -< (val, x)
MySQLValue val ->
arrow @'MySQL -< (val, x)
DataConnectorValue val ->
arrow @'DataConnector -< (val, x)
@@ -483,7 +465,6 @@ parseAnyBackendFromJSON backendKind value = case backendKind of
Postgres Cockroach -> PostgresCockroachValue <$> parseJSON value
MSSQL -> MSSQLValue <$> parseJSON value
BigQuery -> BigQueryValue <$> parseJSON value
MySQL -> MySQLValue <$> parseJSON value
DataConnector -> DataConnectorValue <$> parseJSON value
-- | Codec that can be used to decode and encode @AnyBackend i@ values. Throws
@@ -500,7 +481,6 @@ anyBackendCodec backendKind = case backendKind of
Postgres Cockroach -> dimapCodec PostgresCockroachValue (\case (PostgresCockroachValue v) -> v; _ -> error msg) $ codec @(i ('Postgres 'Cockroach))
MSSQL -> dimapCodec MSSQLValue (\case (MSSQLValue v) -> v; _ -> error msg) $ codec @(i 'MSSQL)
BigQuery -> dimapCodec BigQueryValue (\case (BigQueryValue v) -> v; _ -> error msg) $ codec @(i 'BigQuery)
MySQL -> dimapCodec MySQLValue (\case (MySQLValue v) -> v; _ -> error msg) $ codec @(i 'MySQL)
DataConnector -> dimapCodec DataConnectorValue (\case (DataConnectorValue v) -> v; _ -> error msg) $ codec @(i 'DataConnector)
where
msg = "got unexpected backend type indicating anyBackendCodec was called with the wrong backendType value"
@@ -538,7 +518,6 @@ backendSourceKindFromText text =
<|> PostgresCockroachValue <$> staticKindFromText PostgresCockroachKind
<|> MSSQLValue <$> staticKindFromText MSSQLKind
<|> BigQueryValue <$> staticKindFromText BigQueryKind
<|> MySQLValue <$> staticKindFromText MySQLKind
-- IMPORTANT: This must be the last thing here, since it will accept (almost) any string
<|> DataConnectorValue . DataConnectorKind <$> (preview _Right . mkDataConnectorName =<< GQL.mkName text)
where
@@ -555,6 +534,5 @@ parseBackendSourceKindFromJSON value =
<|> PostgresCockroachValue <$> parseJSON @(BackendSourceKind ('Postgres 'Cockroach)) value
<|> MSSQLValue <$> parseJSON @(BackendSourceKind ('MSSQL)) value
<|> BigQueryValue <$> parseJSON @(BackendSourceKind ('BigQuery)) value
<|> MySQLValue <$> parseJSON @(BackendSourceKind ('MySQL)) value
-- IMPORTANT: This must be the last thing here, since it will accept (almost) any string
<|> DataConnectorValue <$> parseJSON @(BackendSourceKind ('DataConnector)) value
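The alternation above tries each static backend kind in turn and only then falls through to the `DataConnector` parser, which accepts almost any string — hence the IMPORTANT comment that it must come last. A small Python sketch of that ordering concern (names and the kind set are hypothetical, not Hasura's exact validation):

```python
# Ordered-alternative parsing: static kinds are matched first, and the
# permissive data-connector branch must come last, because it would
# otherwise swallow every kind string.

STATIC_KINDS = {"pg", "citus", "cockroach", "mssql", "bigquery"}

def parse_backend_source_kind(text: str):
    if text in STATIC_KINDS:
        return ("static", text)
    # Catch-all: treat any other non-empty name as a data connector.
    if text:
        return ("dataconnector", text)
    raise ValueError("empty backend kind")
```

After this commit, "mysql" no longer matches a static kind, so in the sketch it falls through to the catch-all branch — illustrating why removing the `MySQLValue` alternative is safe for parsing as long as the data-connector branch stays last.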

@@ -5,5 +5,4 @@ module Hasura.Server.API.Instances (module B) where
import Hasura.Backends.BigQuery.Instances.API as B ()
import Hasura.Backends.DataConnector.Adapter.API as B ()
import Hasura.Backends.MSSQL.Instances.API as B ()
import Hasura.Backends.MySQL.Instances.API as B ()
import Hasura.Backends.Postgres.Instances.API as B ()

@@ -20,7 +20,6 @@ import Hasura.Backends.BigQuery.DDL.RunSQL qualified as BigQuery
import Hasura.Backends.DataConnector.Adapter.RunSQL qualified as DataConnector
import Hasura.Backends.DataConnector.Adapter.Types (DataConnectorName, mkDataConnectorName)
import Hasura.Backends.MSSQL.DDL.RunSQL qualified as MSSQL
import Hasura.Backends.MySQL.SQL qualified as MySQL
import Hasura.Backends.Postgres.DDL.RunSQL qualified as Postgres
import Hasura.Base.Error
import Hasura.EncJSON
@@ -63,7 +62,6 @@ data RQLQuery
| RQMssqlRunSql !MSSQL.MSSQLRunSQL
| RQCitusRunSql !Postgres.RunSQL
| RQCockroachRunSql !Postgres.RunSQL
| RQMysqlRunSql !MySQL.RunSQL
| RQBigqueryRunSql !BigQuery.BigQueryRunSQL
| RQDataConnectorRunSql !DataConnectorName !DataConnector.DataConnectorRunSQL
| RQBigqueryDatabaseInspection !BigQuery.BigQueryRunSQL
@@ -95,7 +93,6 @@ instance FromJSON RQLQuery where
"mssql_run_sql" -> RQMssqlRunSql <$> args
"citus_run_sql" -> RQCitusRunSql <$> args
"cockroach_run_sql" -> RQCockroachRunSql <$> args
"mysql_run_sql" -> RQMysqlRunSql <$> args
"bigquery_run_sql" -> RQBigqueryRunSql <$> args
(dcNameFromRunSql -> Just t') -> RQDataConnectorRunSql t' <$> args
"bigquery_database_inspection" -> RQBigqueryDatabaseInspection <$> args
@@ -159,7 +156,6 @@ queryModifiesSchema = \case
RQCitusRunSql q -> Postgres.isSchemaCacheBuildRequiredRunSQL q
RQCockroachRunSql q -> Postgres.isSchemaCacheBuildRequiredRunSQL q
RQMssqlRunSql q -> MSSQL.isSchemaCacheBuildRequiredRunSQL q
RQMysqlRunSql _ -> False
RQBigqueryRunSql _ -> False
RQDataConnectorRunSql _ _ -> False
RQBigqueryDatabaseInspection _ -> False
@@ -187,7 +183,6 @@ runQueryM sqlGen rq = Tracing.newSpan (T.pack $ constrName rq) $ case rq of
RQCount q -> runCount q
RQRunSql q -> Postgres.runRunSQL @'Vanilla sqlGen q
RQMssqlRunSql q -> MSSQL.runSQL q
RQMysqlRunSql q -> MySQL.runSQL q
RQCitusRunSql q -> Postgres.runRunSQL @'Citus sqlGen q
RQCockroachRunSql q -> Postgres.runRunSQL @'Cockroach sqlGen q
RQBigqueryRunSql q -> BigQuery.runSQL q
@@ -210,7 +205,6 @@ queryModifiesUserDB = \case
RQCitusRunSql runsql -> not (Postgres.isReadOnly runsql)
RQCockroachRunSql runsql -> not (Postgres.isReadOnly runsql)
RQMssqlRunSql _ -> True
RQMysqlRunSql _ -> True
RQBigqueryRunSql _ -> True
RQDataConnectorRunSql _ _ -> True
RQBigqueryDatabaseInspection _ -> False

@@ -1,15 +0,0 @@
-- This could also be done with making MySQL itself return a non-redundant structure
-- in JSON to simplify the processing within the engine.
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME, c.ORDINAL_POSITION, c.COLUMN_DEFAULT,
c.IS_NULLABLE, c.DATA_TYPE, c.COLUMN_TYPE, c.COLUMN_KEY, c.COLUMN_COMMENT,
k.CONSTRAINT_NAME, k.ORDINAL_POSITION, k.POSITION_IN_UNIQUE_CONSTRAINT,
k.REFERENCED_TABLE_SCHEMA, k.REFERENCED_TABLE_NAME, k.REFERENCED_COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS c
LEFT OUTER JOIN
INFORMATION_SCHEMA.KEY_COLUMN_USAGE k
ON c.TABLE_NAME = k.TABLE_NAME AND
c.TABLE_SCHEMA = k.TABLE_SCHEMA AND
c.COLUMN_NAME = k.COLUMN_NAME
WHERE c.TABLE_SCHEMA = ?
ORDER BY c.TABLE_NAME ASC, c.ORDINAL_POSITION ASC
;
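As the deleted file's own leading comment notes, this query returns a redundant row per column/key-usage pair, leaving the engine to fold the rows back into per-table metadata. A minimal Python sketch of that folding step (the field selection and dict-shaped rows are illustrative assumptions, not the engine's actual types):

```python
from collections import defaultdict

def group_columns(rows):
    """Fold flat INFORMATION_SCHEMA rows (one per column/key-usage
    pair) into a {table_name: [column dicts]} structure."""
    tables = defaultdict(list)
    for row in rows:
        tables[row["TABLE_NAME"]].append(
            {
                "name": row["COLUMN_NAME"],
                "type": row["DATA_TYPE"],
                "nullable": row["IS_NULLABLE"] == "YES",
                # Foreign-key target, when KEY_COLUMN_USAGE matched.
                "references": row.get("REFERENCED_TABLE_NAME"),
            }
        )
    return dict(tables)
```

Pushing this grouping into the SQL itself (e.g. via JSON aggregation) is exactly the simplification the deleted comment suggests MySQL could have returned directly.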

@@ -1,79 +0,0 @@
module Hasura.Backends.MySQL.DataLoader.ExecuteTests
( spec,
)
where
import Data.HashMap.Strict.InsOrd qualified as InsOrdHashMap
import Data.Vector qualified as V
import Hasura.Backends.MySQL.DataLoader.Execute
import Hasura.Prelude
import Hedgehog
import Hedgehog.Gen
import Hedgehog.Range
import Test.Hspec
import Test.Hspec.Hedgehog
spec :: Spec
spec = do
describe "joinObjectRows" $ do
joinObjectRowsThrowsIfRightRowsIsEmpty
joinObjectRowsThrowsIfRightRowsIsLargerThanOne
describe "leftObjectJoin" $ do
leftObjectJoinThrowsIfRightRowsIsEmpty
leftObjectJoinThrowsIfRightRowsIsLargerThanOne
joinObjectRowsThrowsIfRightRowsIsEmpty :: Spec
joinObjectRowsThrowsIfRightRowsIsEmpty =
it "throws if rightRows is empty" $
joinObjectRows
Nothing
""
InsOrdHashMap.empty
empty
`shouldSatisfy` invariant
joinObjectRowsThrowsIfRightRowsIsLargerThanOne :: Spec
joinObjectRowsThrowsIfRightRowsIsLargerThanOne = do
it "throws if rightRows is two or more"
. hedgehog
$ do
size <- forAll $ integral (linear 2 100)
let result =
joinObjectRows
Nothing
""
InsOrdHashMap.empty
(V.replicate size InsOrdHashMap.empty)
assert $ invariant result
leftObjectJoinThrowsIfRightRowsIsEmpty :: Spec
leftObjectJoinThrowsIfRightRowsIsEmpty =
it "throws if rightRows is empty" $
leftObjectJoin
Nothing
""
[]
(RecordSet Nothing (V.singleton InsOrdHashMap.empty) Nothing)
(RecordSet Nothing mempty Nothing)
`shouldSatisfy` invariant
leftObjectJoinThrowsIfRightRowsIsLargerThanOne :: Spec
leftObjectJoinThrowsIfRightRowsIsLargerThanOne =
it "throws if rightRows is two or more"
. hedgehog
$ do
size <- forAll $ integral (linear 2 100)
let result =
leftObjectJoin
Nothing
""
[]
(RecordSet Nothing (V.singleton InsOrdHashMap.empty) Nothing)
(RecordSet Nothing (V.replicate size InsOrdHashMap.empty) Nothing)
assert $ invariant result
invariant :: Either ExecuteProblem a -> Bool
invariant =
\case
Left (BrokenJoinInvariant _) -> True
_ -> False

@@ -1,30 +0,0 @@
{-# LANGUAGE QuasiQuotes #-}
module Hasura.Backends.MySQL.TypesSpec (spec) where
import Data.Aeson (parseJSON, toJSON)
import Data.Aeson.QQ (aesonQQ)
import Data.Aeson.Types (parseEither)
import Hasura.Backends.MySQL.Types (ConnPoolSettings (..))
import Hasura.Prelude
import Test.Hspec
spec :: Spec
spec = do
describe "MySQL" do
describe "ConnPoolSettings" do
it "should parse idle timeout and max connections" do
let input =
[aesonQQ|
{ "idle_timeout": 100,
"max_connections": 10
}
|]
let decoded = parseEither parseJSON input
let expected = ConnPoolSettings {_cscIdleTimeout = 100, _cscMaxConnections = 10}
decoded `shouldBe` Right expected
it "should round-trip" do
let expected = ConnPoolSettings {_cscIdleTimeout = 100, _cscMaxConnections = 10}
let actual = parseEither parseJSON $ toJSON expected
actual `shouldBe` Right expected

@@ -22,7 +22,7 @@ import Test.Hspec.Expectations.Json (shouldBeJson)
spec :: Spec
spec = describe "BackendMap" do
it "serializes via Autodocodec" do
let mysqlConfig = BM.singleton @'MySQL @BackendConfigWrapper (BackendConfigWrapper ())
let mssqlConfig = BM.singleton @'MSSQL @BackendConfigWrapper (BackendConfigWrapper ())
let dataconnectorConfig =
BM.singleton @'DataConnector @BackendConfigWrapper
( BackendConfigWrapper $
@@ -34,12 +34,12 @@ spec = describe "BackendMap" do
}
)
)
let configs = mysqlConfig <> dataconnectorConfig
let configs = mssqlConfig <> dataconnectorConfig
let expected =
[aesonQQ|
{
"mysql": [],
"mssql": [],
"dataconnector": {
"MyConnector": {
"uri": "https://somehost.org",