refactor community content and folder structure (#1353)

Tirumarai Selvan 2019-01-17 15:57:28 +05:30 committed by Shahidh K Muhammed
parent 7ff1c8829a
commit 9f6ce68e3c
448 changed files with 3163 additions and 21858 deletions


@ -8,7 +8,7 @@ started with auth webhooks, triggers, etc. and some community tooling around the
Feel free to open pull requests to add more content here.
- [Boilerplates](boilerplates)
- [Examples](examples)
- [Sample Apps](sample-apps)
- [Tools](tools)
## License


@ -1,5 +0,0 @@
node_modules
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@ -1,11 +0,0 @@
FROM node:alpine
WORKDIR /server
COPY ./package.json /server/
RUN npm install
COPY . /server/
CMD ["npm", "start"]


@ -1,198 +0,0 @@
# GraphQL Custom Resolver Example
> **NOTE**: You can now merge [remote schemas](../../../remote-schemas.md) from custom [GraphQL servers](../graphql-servers) using Hasura itself.
> - Boilerplates for custom GraphQL servers have been moved [here](../graphql-servers). Also, a recently released feature removes the need for an external GraphQL gateway by letting you merge remote schemas in GraphQL Engine itself - [read more](../../../remote-schemas.md) (*please check the caveats for current limitations of the feature*).
> - Once schemas have been merged in GraphQL Engine, Hasura proxies requests to the remote GraphQL servers.
> - Adding another layer in front of GraphQL Engine impacts performance by as much as **4X**, due to the serialization-deserialization overhead. Using an external GraphQL gateway is recommended only if your use case is blocked by any of the current limitations.
## Motivation
Hasura GraphQL Engine provides instant GraphQL APIs over the tables and views of
any Postgres database. It also comes with a fine-grained access control layer
that helps you restrict the data that can be consumed.
However, sometimes you might have to write custom resolvers to capture business
logic that is unrelated to the database, that needs to execute a custom
transaction, or that writes to the database.
In this example, we illustrate how to write custom resolvers and merge them with
the Hasura GraphQL Engine. We combine Hasura GraphQL Engine's GraphQL API
running at `https://bazookaand.herokuapp.com/v1alpha1/graphql` with the
following custom resolvers:
1. A `hello` query
2. A `count` query (that returns a counter from another data source)
3. An `increment_counter` mutation that increments the value of `count`.
4. A `user_average_age` query that directly makes an SQL query to Postgres
using knex.
You can use this as a boilerplate to write custom resolvers with Hasura GraphQL
Engine.
![Custom resolvers with Hasura GraphQL engine](./assets/custom-resolvers-diagram.png)
## Usage
1. Install the required dependencies.
```bash
npm install
```
2. Set appropriate environment variables for the GraphQL Engine URL, the access
key to GraphQL Engine and the Postgres connection string.
```bash
# without the /v1alpha1/graphql part
export HASURA_GRAPHQL_ENGINE_URL='https://hge.herokuapp.com'
export X_HASURA_ACCESS_KEY='<access_key>'
# Only required for the direct SQL resolver
export PG_CONNECTION_STRING='<postgres-connection-string>'
```
3. Run the server
```bash
npm start
```
## Deployment
You can deploy this sample boilerplate with:
* Now
* Docker
### Deploy using [Now](https://zeit.co/now)
Run these commands to instantly deploy this boilerplate using Now.
```bash
git clone https://github.com/hasura/graphql-engine
cd community/boilerplates/custom-resolvers
now -e HASURA_GRAPHQL_ENGINE_URL='https://hge.herokuapp.com' \
    -e X_HASURA_ACCESS_KEY='<access_key>' \
    --npm
```
### Deploy the Docker image
This project comes with a [`Dockerfile`](Dockerfile).
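As a minimal sketch, you can build and run the image locally; the image tag and published port below are assumptions (the server listens on `PORT`, defaulting to 4000 in `src/index.js`):
```bash
# build the image (the "custom-resolvers" tag is arbitrary)
docker build -t custom-resolvers .
# run it with the same environment variables as above
docker run -p 4000:4000 \
  -e HASURA_GRAPHQL_ENGINE_URL='https://hge.herokuapp.com' \
  -e X_HASURA_ACCESS_KEY='<access_key>' \
  custom-resolvers
```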
## Implementation Details
We will use Apollo's `graphql-tools` library to make a working GraphQL schema
out of our custom resolvers. Finally, we will merge these resolvers with the
existing Hasura schema so that everything can be queried under the same endpoint.
### Writing type definitions
The type definitions are written in standard GraphQL format. We need the
following queries and mutations in our custom logic:
```graphql
type Query {
  # field "hello" will return "Hello world!" which is a String
  hello: String,
  # field "count" will return an Int
  count: Int,
  # field "user_average_age" will return a Float
  user_average_age: Float
}

type Mutation {
  # field "increment_counter" will increment the counter and return type IncrementCounter
  increment_counter: IncrementCounter
}

# IncrementCounter simply returns the new value of the counter
type IncrementCounter {
  new_count: Int
}
```
### Writing resolvers
Every resolver is a function that is executed with the following arguments in
the order below:
1. `root`: The root of the current field
2. `args`: The arguments provided in the query
3. `context`: The server context, which also contains the request headers
4. `info`: The AST document related to the query made
The resolvers in our case are:
```js
const resolvers = {
  // resolvers for queries
  Query: {
    hello: (root, args, context, info) => {
      // return response
      return 'Hello world!';
    },
    count: (root, args, context, info) => {
      // return response
      return count;
    },
    user_average_age: async (root, args, context, info) => {
      // make SQL query using knex client
      const response = await knexClient('user').avg('age');
      // return response
      return response[0].avg;
    }
  },
  // resolvers for mutations
  Mutation: {
    increment_counter: (root, args, context, info) => {
      // return response
      return { new_count: ++count };
    }
  }
};
```
### Making a new schema out of these custom resolvers
Use the `makeExecutableSchema()` function from the `graphql-tools` library to
make a schema out of the type definitions and resolvers above.
```js
import { makeExecutableSchema } from 'graphql-tools';

const executableCustomSchema = makeExecutableSchema({
  typeDefs,
  resolvers,
});
```
### Merging with existing Hasura schema and serving it
Merge these custom resolvers with the Hasura GraphQL Engine schema using the
`mergeSchemas()` function from the `graphql-tools` library.
```js
import { mergeSchemas } from 'graphql-tools';

const newSchema = mergeSchemas({
  schemas: [
    executableCustomSchema,
    executableHasuraSchema
  ]
});

const server = new ApolloServer({
  schema: newSchema
});

server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
Check [this file](src/index.js) to see how it is done.
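As an illustration, once the merged server is running, fields from the custom resolvers and from the Hasura-generated schema can be fetched in a single query (the `user` selection below assumes a `user` table with `id` and `age` columns tracked by Hasura, as used by the `user_average_age` resolver):
```graphql
query {
  # from the custom resolvers
  hello
  count
  user_average_age
  # from the Hasura GraphQL Engine schema (assumes a tracked "user" table)
  user {
    id
    age
  }
}
```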


@ -1,12 +0,0 @@
{
  "name": "Custom Resolvers boilerplate",
  "description": "Custom resolvers boilerplate for Hasura GraphQL Engine",
  "logo": "https://storage.googleapis.com/hasura-graphql-engine/console/assets/favicon.png",
  "keywords": [
    "graphql",
    "heroku",
    "postgres",
    "hasura"
  ],
  "repository": "https://github.com/hasura/graphql-engine/community/boilerplates/custom-resolvers"
}


@ -1,25 +0,0 @@
{
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/hasura/graphql-engine/community/boilerplates/custom-resolvers"
  },
  "scripts": {
    "start": "node -r esm src/index.js"
  },
  "dependencies": {
    "apollo-link-http": "^1.5.4",
    "apollo-link-ws": "^1.0.8",
    "apollo-server": "^2.0.6",
    "apollo-server-express": "^1.3.6",
    "apollo-utilities": "^1.0.20",
    "esm": "^3.0.61",
    "graphql": "^0.13.2",
    "graphql-tools-with-subscriptions": "^1.0.0",
    "knex": "^0.15.2",
    "node-fetch": "^2.1.2",
    "pg": "^7.4.3",
    "subscriptions-transport-ws": "^0.9.14",
    "ws": "^6.0.0"
  }
}


@ -1,45 +0,0 @@
import knex from 'knex';
import pg from 'pg';

const { PG_CONNECTION_STRING } = process.env;

// create a knex client to connect directly to Postgres
pg.defaults.ssl = true;
const knexClient = knex({
  client: 'pg',
  connection: PG_CONNECTION_STRING
});

let count = 0;

// custom resolvers
const resolvers = {
  // resolvers for queries
  Query: {
    hello: (root, args, context, info) => {
      // return response
      return 'Hello world!';
    },
    count: (root, args, context, info) => {
      // return response
      return count;
    },
    user_average_age: async (root, args, context, info) => {
      // make SQL query using knex client
      const response = await knexClient('user').avg('age');
      // return response
      return response[0].avg;
    }
  },
  // resolvers for mutations
  Mutation: {
    increment_counter: (root, args, context, info) => {
      // return response
      return { new_count: ++count };
    }
  }
};

export default resolvers;


@ -1,18 +0,0 @@
// custom type definitions
const typeDefs = `
  type Query {
    hello: String,
    count: Int,
    user_average_age: Float
  }

  type Mutation {
    increment_counter: MutationResp
  }

  type MutationResp {
    new_count: Int
  }
`;
export default typeDefs;


@ -1,53 +0,0 @@
import fetch from 'node-fetch';
import { ApolloServer } from 'apollo-server';
import { mergeSchemas, makeExecutableSchema } from 'graphql-tools';
import { getRemoteSchema } from './utils';
import typeDefs from './customTypeDefs';
import resolvers from './customResolvers';

const HASURA_GRAPHQL_ENGINE_URL = process.env.HASURA_GRAPHQL_ENGINE_URL || `https://bazookaand.herokuapp.com`;
const HASURA_GRAPHQL_API_URL = HASURA_GRAPHQL_ENGINE_URL + '/v1alpha1/graphql';
const ACCESS_KEY = process.env.X_HASURA_ACCESS_KEY;

const runServer = async () => {
  // make Hasura schema
  const executableHasuraSchema = await getRemoteSchema(
    HASURA_GRAPHQL_API_URL,
    ACCESS_KEY && { 'x-hasura-access-key': ACCESS_KEY }
  );
  // make executable schema out of custom resolvers and typedefs
  const executableCustomSchema = makeExecutableSchema({
    typeDefs,
    resolvers,
  });
  // merge custom resolvers with Hasura schema
  const finalSchema = mergeSchemas({
    schemas: [
      executableCustomSchema,
      executableHasuraSchema,
    ]
  });
  // instantiate a server instance
  const server = new ApolloServer({
    schema: finalSchema,
    introspection: true,
    playground: true
  });
  // run the server
  server.listen({
    port: process.env.PORT || 4000
  }).then(({ url }) => {
    console.log('Server running. Open ' + url + ' to run queries.');
  });
};

try {
  runServer();
} catch (e) {
  console.log(e, e.message, e.stack);
}


@ -1,62 +0,0 @@
import { HttpLink } from 'apollo-link-http';
import { WebSocketLink } from 'apollo-link-ws';
import { SubscriptionClient } from 'subscriptions-transport-ws';
import ws from 'ws';
import { makeRemoteExecutableSchema, introspectSchema } from 'graphql-tools';
import fetch from 'node-fetch';
import { split } from 'apollo-link';
import { getMainDefinition } from 'apollo-utilities';

const { HASURA_GRAPHQL_ENGINE_AUTH_HOOK } = process.env;

// util function to fetch and create remote schema
export const getRemoteSchema = async (uri, headers) => {
  const link = makeHttpAndWsLink(uri, headers);
  const schema = await introspectSchema(link);
  return makeRemoteExecutableSchema({
    schema,
    link
  });
};

/* create an apollo-link instance that makes a
   WS connection for subscriptions and an
   HTTP connection for queries and mutations
*/
const makeHttpAndWsLink = (uri, headers) => {
  // Create an HTTP link:
  const httpLink = new HttpLink({
    uri,
    fetch,
    headers
  });

  // Create a WebSocket link:
  const wsLink = new WebSocketLink(new SubscriptionClient(
    uri,
    {
      reconnect: true,
      connectionParams: {
        headers
      }
    },
    ws
  ));

  // choose the link to use based on the operation type
  const link = split(
    // split based on operation type
    ({ query }) => {
      const { kind, operation } = getMainDefinition(query);
      return kind === 'OperationDefinition' && operation === 'subscription';
    },
    wsLink,
    httpLink,
  );

  return link;
};


@ -20,8 +20,3 @@ Some of the language/platforms are work in progress. We welcome contributions fo
1. AWS account with billing enabled
2. Hasura GraphQL Engine
### AWS setup
You need to create a corresponding AWS Lambda for each of these examples.
Since the Hasura event system invokes webhooks as event triggers, each Lambda needs to be exposed as a webhook: add an API Gateway trigger to each Lambda and attach an API to it, as sketched below.
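As a rough sketch with the AWS CLI, where the function name, runtime, handler, and role ARN are all placeholders (the API Gateway trigger itself can be added from the AWS console, as described above):
```bash
# package the function code and create the Lambda (placeholder names/role)
zip function.zip index.js
aws lambda create-function \
  --function-name hasura-event-trigger-echo \
  --runtime nodejs8.10 \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --role 'arn:aws:iam::<account-id>:role/<lambda-execution-role>'
# then attach an API Gateway trigger to the function and use the
# resulting endpoint URL as the webhook URL of the Hasura event trigger
```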


@ -0,0 +1,7 @@
# Sample boilerplates for Hasura Event Triggers
[echo](echo/): echoes the trigger payload.
Helps in understanding the event payload (a sample is sketched below) and how to parse the data.
[mutation](mutation/): inserts related data on an insert event using a GraphQL mutation.
Helps in understanding how to interact with the database using GraphQL from an event trigger.
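For reference, the JSON payload that Hasura delivers to these webhooks has roughly the following shape, matching the structs in the Go boilerplates below (the table, trigger name, and row values here are made-up examples):
```json
{
  "id": "<event-uuid>",
  "event": {
    "op": "UPDATE",
    "data": {
      "old": { "id": 1, "note": "old note" },
      "new": { "id": 1, "note": "new note" }
    }
  },
  "table": { "name": "note", "schema": "public" },
  "trigger": { "id": "<trigger-uuid>", "name": "note_revision_trigger" }
}
```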


@ -0,0 +1,66 @@
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type HasuraEvent struct {
	ID      string `json:"id"`
	Event   `json:"event"`
	Table   `json:"table"`
	Trigger `json:"trigger"`
}

type Event struct {
	Op   string `json:"op"`
	Data `json:"data"`
}

type Data struct {
	Old map[string]interface{} `json:"old"`
	New map[string]interface{} `json:"new"`
}

type Table struct {
	Name   string `json:"name"`
	Schema string `json:"schema"`
}

type Trigger struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

type TriggerResponse struct {
	Message string                 `json:"message"`
	OldData map[string]interface{} `json:"oldData"`
	NewData map[string]interface{} `json:"newData"`
}

func Handler(w http.ResponseWriter, r *http.Request) {
	decoder := json.NewDecoder(r.Body)
	var event HasuraEvent
	err := decoder.Decode(&event)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	response := TriggerResponse{
		Message: fmt.Sprintf(
			"got '%s' for '%s' operation on '%s' table in '%s' schema from '%s' trigger",
			event.ID,
			event.Event.Op,
			event.Table.Name,
			event.Table.Schema,
			event.Trigger.Name,
		),
		OldData: event.Data.Old,
		NewData: event.Data.New,
	}
	err = json.NewEncoder(w).Encode(response)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}


@ -0,0 +1,5 @@
{
  "project": "zeit-echo",
  "version": 2,
  "builds": [{ "src": "*.go", "use": "@now/go" }]
}


@ -0,0 +1,98 @@
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"os"
)

type HasuraEvent struct {
	ID      string `json:"id"`
	Event   `json:"event"`
	Table   `json:"table"`
	Trigger `json:"trigger"`
}

type Event struct {
	Op   string `json:"op"`
	Data `json:"data"`
}

type Data struct {
	Old map[string]interface{} `json:"old"`
	New map[string]interface{} `json:"new"`
}

type Table struct {
	Name   string `json:"name"`
	Schema string `json:"schema"`
}

type Trigger struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

const MUTATION_UPDATE_NOTE_REVISION = `
mutation updateNoteRevision ($object: note_revision_insert_input!) {
  insert_note_revision (objects: [$object]) {
    affected_rows
    returning {
      id
    }
  }
}
`

var HGE_ENDPOINT = os.Getenv("HGE_ENDPOINT")

func Handler(w http.ResponseWriter, r *http.Request) {
	decoder := json.NewDecoder(r.Body)
	var event HasuraEvent
	err := decoder.Decode(&event)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	note_id, ok := event.Data.Old["id"]
	if !ok {
		http.Error(w, "invalid payload: note id not found", http.StatusBadRequest)
		return
	}
	note, ok := event.Data.New["note"]
	if !ok {
		http.Error(w, "invalid payload: note not found", http.StatusBadRequest)
		return
	}
	// execute the mutation
	payload := map[string]interface{}{
		"query": MUTATION_UPDATE_NOTE_REVISION,
		"variables": map[string]interface{}{
			"object": map[string]interface{}{
				"note_id": note_id.(float64),
				"note":    note.(string),
			},
		},
	}
	b := new(bytes.Buffer)
	json.NewEncoder(b).Encode(payload)
	res, err := http.Post(HGE_ENDPOINT, "application/json; charset=utf-8", b)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer res.Body.Close()
	var response map[string]interface{}
	err = json.NewDecoder(res.Body).Decode(&response)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	err = json.NewEncoder(w).Encode(response)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}


@ -0,0 +1,5 @@
{
  "project": "zeit-mutation",
  "version": 2,
  "builds": [{ "src": "*.go", "use": "@now/go" }]
}
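A minimal deploy sketch with the Now CLI, as used earlier in this repo (the endpoint value is a placeholder; `HGE_ENDPOINT` must point at the GraphQL API URL that the handler posts the mutation to):
```bash
now -e HGE_ENDPOINT='https://your-app.herokuapp.com/v1alpha1/graphql'
```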


@ -0,0 +1,5 @@
{
  "project": "zeit-echo",
  "version": 2,
  "builds": [{ "src": "index.js", "use": "@now/node" }]
}


@ -0,0 +1,5 @@
{
  "project": "zeit-mutation",
  "version": 2,
  "builds": [{ "src": "index.js", "use": "@now/node" }]
}
