Initial commit for hub information

PR-URL: https://github.com/hasura/graphql-engine-mono/pull/9631
GitOrigin-RevId: 64c40a30e3483fb0322ce9f2fbb2e9c1e3f1d9ba
Brandon Martin 2023-06-21 14:22:47 -06:00 committed by hasura-bot
parent bf99249bbc
commit 26b38bbe5b
10 changed files with 2815 additions and 2626 deletions

2641 dc-agents/DOCUMENTATION.md Normal file

File diff suppressed because it is too large

46 dc-agents/HUB.md Normal file

@@ -0,0 +1,46 @@
# Agents
## [Reference agent](https://github.com/hasura/graphql-engine/tree/master/dc-agents/reference)
The reference agent is a good starting point if you want to build your own connector; alternatively, consider using the SQLite agent.
## OLTP
[OLTP guide](guides/OLTP.md)
- SQLite
- [Docs](https://github.com/hasura/graphql-engine/tree/master/dc-agents/sqlite#data-connector-agent-for-sqlite)
- [Repo](https://github.com/hasura/graphql-engine/tree/master/dc-agents/sqlite)
- PostgreSQL
- [Docs](https://hasura.io/docs/latest/databases/postgres/index/)
- [Repo](https://github.com/hasura/graphql-engine/tree/master/server)
- [MariaDB](https://hasura.io/docs/latest/databases/mariadb/index/)
- [MySQL](https://hasura.io/docs/latest/databases/mysql/index/)
- [Oracle](https://hasura.io/docs/latest/databases/oracle/index/)
- [SQL Server](https://hasura.io/docs/latest/databases/ms-sql-server/index/)
## OLAP
[OLAP guide](guides/OLAP.md)
- [Athena](https://hasura.io/docs/latest/databases/athena/index/)
- [BigQuery](https://hasura.io/docs/latest/databases/bigquery/index/)
- [Snowflake](https://hasura.io/docs/latest/databases/snowflake/index/)
- ClickHouse (Coming soon...)
## Vector
- Weaviate (Coming soon...)
## NoSQL
[NoSQL guide](guides/NoSQL.md)
- [Mongo](https://hasura.io/docs/latest/databases/mongodb/index/)
## APIs
- GitHub (Coming soon...)
- Prometheus (Coming soon...)
- Salesforce (Coming soon...)
- Zendesk (Coming soon...)

File diff suppressed because it is too large

11 dc-agents/guides/Cassandra.md Normal file

@@ -0,0 +1,11 @@
# Cassandra
1. Distributed wide-column database with extremely high write throughput
2. Query-first data modeling, with denormalized and duplicated data to optimize for reads
3. Data for a particular partition is always stored together within that partition
   - All reads and writes must supply the partition key
Hasura + Cassandra:
1. Parameterized models for reads, to ensure the partition key is always supplied (see the sketch below)
2. Simple writes, though writes are typically done by a backend system
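A minimal sketch of what the partition-key constraint looks like from an application, using the DataStax `cassandra-driver` for Node.js; the keyspace, table, and column names are hypothetical, not part of any existing agent:

```typescript
import { Client } from "cassandra-driver";

// Hypothetical cluster and keyspace; "events" is partitioned by device_id.
const client = new Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "telemetry",
});

// A "parameterized model" style read: the partition key (device_id) is a
// required argument, so every query can be routed to a single partition.
async function eventsForDevice(deviceId: string, limit = 100) {
  const result = await client.execute(
    "SELECT event_time, payload FROM events WHERE device_id = ? LIMIT ?",
    [deviceId, limit],
    { prepare: true }
  );
  return result.rows;
}
```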

10 dc-agents/guides/Kafka.md Normal file

@@ -0,0 +1,10 @@
# Hasura + Kafka
Kafka:
1. Extremely reliable and scalable ingestion of events
Hasura + Kafka:
1. Increases the number of concurrent subscribers to Kafka by increasing fan-out (see the sketch below)
2. Allows validated mutations that write to Kafka
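A minimal sketch of the fan-out idea, not Hasura's actual implementation, using the `kafkajs` client; the broker address, topic name, and subscribe/run helpers are hypothetical:

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "hasura-sketch", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "hasura-subscriptions" });

// Many GraphQL subscribers share one Kafka consumer, so concurrent end-user
// subscriptions are no longer bounded by the topic's partition count.
const subscribers = new Set<(event: string) => void>();

export function subscribe(handler: (event: string) => void): () => void {
  subscribers.add(handler);
  return () => subscribers.delete(handler);
}

export async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "orders", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = message.value?.toString() ?? "";
      for (const handler of subscribers) handler(event); // fan out to every subscriber
    },
  });
}
```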

17 dc-agents/guides/NoSQL.md Normal file

@@ -0,0 +1,17 @@
# Hasura + NoSQL (JSON)
Mongo is awesome because:
1. It is super flexible: developers decide what to do as their requirements change, e.g. all kinds of read models with varying degrees of normalization. Writes can have different consistency guarantees, letting developers control how fast or slow writes are.
2. Reads & writes are super fast, easy to vertically and horizontally scale, easy to shard and geo-distribute.
What Hasura needs to make Hasura + Mongo awesome:
1. Support modeling and querying nested (embedded) documents well since that is the dominant pattern
2. Support relationships on complex LHS arguments (nested objects/arrays) using Mongo's native & awesome features (sketched after this list):
   - `$lookup`, `$unwind`: similar to joins
   - `$graphLookup`: recursive join! great if we can bring that value to the fore :)
3. Support aggregation pipelines in parameterized models
4. Other R&D ideas that need significant guidance from Mongo users/customers:
   - Help with data/schema migration on Mongo
   - Support denormalized writes?
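A minimal sketch of the `$lookup`/`$unwind` and `$graphLookup` patterns referred to in item 2, using the official `mongodb` Node.js driver; the database, collection, and field names are hypothetical:

```typescript
import { MongoClient } from "mongodb";

async function main() {
  const client = new MongoClient("mongodb://localhost:27017"); // hypothetical deployment
  await client.connect();
  const db = client.db("shop");

  // $lookup + $unwind behaves like a join from orders to customers.
  const ordersWithCustomer = await db.collection("orders").aggregate([
    { $lookup: { from: "customers", localField: "customerId", foreignField: "_id", as: "customer" } },
    { $unwind: "$customer" },
  ]).toArray();

  // $graphLookup is the recursive join: walk reportsTo until the chain ends.
  const reportingChains = await db.collection("employees").aggregate([
    {
      $graphLookup: {
        from: "employees",
        startWith: "$reportsTo",
        connectFromField: "reportsTo",
        connectToField: "_id",
        as: "managementChain",
      },
    },
  ]).toArray();

  console.log(ordersWithCustomer.length, reportingChains.length);
  await client.close();
}

main();
```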

38 dc-agents/guides/OLAP.md Normal file

@@ -0,0 +1,38 @@
# Hasura + SQL (OLAP)
What do people like doing with SQL OLAP DBs?
- Use SQL to extract business insights from their data
- SQL syntax and expressive power are very important
- Since data is written infrequently and comes from another source, consistency is assumed once data is in the OLAP store
- Physical data modeling and normalization are not important
- Joins are not important; data is often denormalized
The Hasura + X story:
- Hasura makes OLAP stores high-concurrency and low-latency
  - Query batching
  - Caching
- Hasura brings end-user authz to the aggregations API (on single tables)
- Hasura brings out the best of OLAP features using parameterized models (PMs)
**What Hasura needs to have to make Hasura + OLAP CDWs awesome:**
1. All single-model features
2. Aggregations roadmap:
   - Group By
   - CTEs, pivot tables
3. Parameterized models
4. High concurrency by batching the queries that arrive each second (see the sketch at the end of this guide)
5. Low latency by allowing caches to be pre-warmed
6. A solutions guide for ETL-ing data into a row-storage engine instead of a columnar one
7. Async queries for slow-running queries
**ClickHouse specifics over and above:**
1. Support dictionaries instead of joins
2. Support primary keys and skip indices
**Snowflake specifics over and above:**
1. Support time-based snapshots in parameterized models
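A minimal sketch of the per-second query batching in item 4, not Hasura's implementation: identical queries arriving within the same short window share a single warehouse round trip. `runWarehouseQuery`, the window length, and the SQL are hypothetical:

```typescript
// Hypothetical executor: in a real deployment this would call the warehouse.
async function runWarehouseQuery(sql: string): Promise<unknown[]> {
  console.log("executed once:", sql);
  return [];
}

type Waiter = { resolve: (rows: unknown[]) => void; reject: (err: unknown) => void };

const pending = new Map<string, Waiter[]>();
let flushScheduled = false;

// Identical queries issued within the same one-second window are coalesced.
export function batchedQuery(sql: string): Promise<unknown[]> {
  return new Promise((resolve, reject) => {
    const waiters = pending.get(sql) ?? [];
    waiters.push({ resolve, reject });
    pending.set(sql, waiters);
    if (!flushScheduled) {
      flushScheduled = true;
      setTimeout(flush, 1000);
    }
  });
}

async function flush(): Promise<void> {
  flushScheduled = false;
  const batch = new Map(pending);
  pending.clear();
  for (const [sql, waiters] of batch) {
    try {
      const rows = await runWarehouseQuery(sql); // one warehouse hit per distinct query
      waiters.forEach((w) => w.resolve(rows));
    } catch (err) {
      waiters.forEach((w) => w.reject(err));
    }
  }
}
```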

13 dc-agents/guides/OLTP.md Normal file

@@ -0,0 +1,13 @@
# Hasura + SQL (OLTP)
What do people like doing with SQL DBs?
- Normalize their data so that it's easy to ensure consistency
- Read small amounts of related data quickly by specifying filter/sort predicates
- Write data in transactions to ensure consistency
The Hasura + X story:
- Tables, views and functions that return relations become models
- Models
- A transaction becomes a command: it takes an input model, its body is the invocation of the transaction, and its output relation is the output model (see the sketch below)
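A minimal sketch of the "transaction as command" idea, assuming a Postgres source and the `pg` client; the `placeOrder` command, tables, and types are hypothetical:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/shop" }); // hypothetical DB

// Input model and output model for the hypothetical "place order" command.
type PlaceOrderInput = { customerId: number; productId: number; quantity: number };
type Order = { id: number; customer_id: number; product_id: number; quantity: number };

// The transaction body is the command: input model in, output model (rows) out.
async function placeOrder(input: PlaceOrderInput): Promise<Order> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "UPDATE inventory SET stock = stock - $1 WHERE product_id = $2",
      [input.quantity, input.productId]
    );
    const res = await client.query<Order>(
      "INSERT INTO orders (customer_id, product_id, quantity) VALUES ($1, $2, $3) RETURNING *",
      [input.customerId, input.productId, input.quantity]
    );
    await client.query("COMMIT");
    return res.rows[0];
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```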

@@ -0,0 +1,6 @@
- [OLTP](OLTP.md)
- [OLAP](OLAP.md)
- [NoSQL](NoSQL.md)
- [Redis](Redis.md)
- [Cassandra](Cassandra.md)
- [Kafka](Kafka.md)

10 dc-agents/guides/Redis.md Normal file

@@ -0,0 +1,10 @@
# Hasura + Redis
Redis:
1. Super low-latency reads
Hasura + Redis:
1. Ignore models entirely.
2. Support functions that allow getting values from Redis, with authz on the input "key" and the output "value". Output "values" can have types and will therefore show up properly in the GraphQL schema. (See the sketch below.)
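A minimal sketch of such a function, using `ioredis`; the key layout, session shape, and authz rule are hypothetical:

```typescript
import Redis from "ioredis";

const redis = new Redis("redis://localhost:6379"); // hypothetical instance

// Hypothetical session context and authz rule: users may only read their own keys.
type Session = { userId: string };
type UserProfile = { name: string; email: string };

async function getUserProfile(session: Session, userId: string): Promise<UserProfile | null> {
  // Authz on the input "key": reject reads of other users' data.
  if (session.userId !== userId) {
    throw new Error("not authorized to read this key");
  }
  const raw = await redis.get(`profile:${userId}`);
  // The output "value" is typed, so it can surface in the GraphQL schema.
  return raw ? (JSON.parse(raw) as UserProfile) : null;
}
```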