---
title: 'Build Your Own Personal Twitter Agent 🧠🐦⛓ with LangChain'
authors: [vinny]
image: /img/build-your-own-twitter-agent/twitter-agent-logo.png
tags: [wasp, ai, gpt, langchain, fullstack, node, react, agent]
---

import Link from '@docusaurus/Link';
import useBaseUrl from '@docusaurus/useBaseUrl';
import InBlogCta from './components/InBlogCta';
import WaspIntro from './_wasp-intro.md';
import ImgWithCaption from './components/ImgWithCaption'

## TL;DR

[LangChain](https://js.langchain.com), ChatGPT, and other emerging technologies have made it possible to build some really creative tools. In this tutorial, we'll build a full-stack web app that acts as our own personal Twitter Agent, or "intern", as I like to call it. It keeps track of your notes and ideas, and uses them — along with tweets from trend-setting Twitter users — to brainstorm new ideas and write tweet drafts for you! 💥

BTW, if you get stuck during the tutorial, or at any point just want to check out the full, final repo of the app we're building, here it is: https://github.com/vincanger/twitter-intern

## Before We Begin

[Wasp = }](https://wasp-lang.dev) is the only open-source, completely serverful full-stack React/Node framework with a built-in compiler that lets you build your app in a day and deploy with a single CLI command. We're working hard to help you build performant web apps as easily as possible — including making these tutorials, which are released weekly!
We would be super grateful if you could help us out by starring our repo on GitHub: [https://www.github.com/wasp-lang/wasp](https://www.github.com/wasp-lang/wasp) 🙏

![https://media2.giphy.com/media/d0Pkp9OMIBdC0/giphy.gif?cid=7941fdc6b39mgj7h8orvi0f4bjebceyx4gj0ih1xb6s05ujc&ep=v1_gifs_search&rid=giphy.gif&ct=g](https://media2.giphy.com/media/d0Pkp9OMIBdC0/giphy.gif?cid=7941fdc6b39mgj7h8orvi0f4bjebceyx4gj0ih1xb6s05ujc&ep=v1_gifs_search&rid=giphy.gif&ct=g)

*…even Ron would star [Wasp on GitHub](https://www.github.com/wasp-lang/wasp)* 🤩

## Background

Twitter is a great marketing tool. It's also a great way to explore ideas and refine your own. But it can be time-consuming and difficult to maintain a tweeting habit.

![https://media0.giphy.com/media/WSrR5xkvljaFMe7UPo/giphy.gif?cid=7941fdc6g9o3drj567dbwyuo1c66x76eq8awc2r1oop8oypl&ep=v1_gifs_search&rid=giphy.gif&ct=g](https://media0.giphy.com/media/WSrR5xkvljaFMe7UPo/giphy.gif?cid=7941fdc6g9o3drj567dbwyuo1c66x76eq8awc2r1oop8oypl&ep=v1_gifs_search&rid=giphy.gif&ct=g)

That's why I decided to build my own personal Twitter Agent with [LangChain](https://js.langchain.com) on the basis of these assumptions:

- 🧠 LLMs (like ChatGPT) aren't the best writers, but they ARE great at brainstorming new ideas.
- 📊 Certain Twitter users drive the majority of discourse within certain niches, i.e. trend-setters influence what's being discussed at the moment.
- 💡 The Agent needs context in order to generate ideas relevant to YOU and your opinions, so it should have access to your notes, ideas, tweets, etc.

So instead of trying to build a fully autonomous agent that does the tweeting for you, I thought it would be better to build an agent that does the BRAINSTORMING for you, based on your favorite trend-setting Twitter users as well as your own ideas. Imagine it like an intern that does the grunt work, while you do the curating!
![https://media.giphy.com/media/26DNdV3b6dqn1jzR6/giphy.gif](https://media.giphy.com/media/26DNdV3b6dqn1jzR6/giphy.gif)

In order to accomplish this, we need to take advantage of a few hot AI tools:

- Embeddings and Vector Databases
- LLMs (Large Language Models), such as ChatGPT
- LangChain and sequential "chains" of LLM calls

Embeddings and Vector Databases give us a powerful way to perform similarity searches on our own notes and ideas. If you're not familiar with [similarity search](https://www.pinecone.io/learn/what-is-similarity-search/), the simplest way to describe it is by comparing it to a normal Google search. In a normal search, the phrase "a mouse eats cheese" will return results containing a combination of **those words only**. A vector-based similarity search, on the other hand, would return those results, as well as results containing related words such as "dog", "cat", "bone", and "fish". You can see why that's so powerful: even if our notes are only related, not exact matches, our similarity search will still return them!

![https://media2.giphy.com/media/xUySTD7evBn33BMq3K/giphy.gif?cid=7941fdc6273if8qfk83gbnv8uabc4occ0tnyzk0g0gfh0qg5&ep=v1_gifs_search&rid=giphy.gif&ct=g](https://media2.giphy.com/media/xUySTD7evBn33BMq3K/giphy.gif?cid=7941fdc6273if8qfk83gbnv8uabc4occ0tnyzk0g0gfh0qg5&ep=v1_gifs_search&rid=giphy.gif&ct=g)

For example, if our favorite trend-setting Twitter user makes a post about the benefits of TypeScript, but we only have a note on "our favorite React hooks", our similarity search would still likely return that note. And that's huge!

Once we get those notes, we can pass them to the ChatGPT completion API along with a prompt to generate more ideas. The result from this prompt will then be sent to another prompt with instructions to generate a draft tweet. We save these sweet results to our Postgres relational database.
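Before we build the real thing, here's a minimal, self-contained sketch of that two-step prompting flow, with the LLM call stubbed out so it runs without an API key. The function names and prompt wording here are made up for illustration; the actual app will use LangChain and the OpenAI API instead:

```typescript
// An "LLM" here is just an async function from a prompt to a completion.
type Complete = (prompt: string) => Promise<string>;

// Step 1: brainstorm ideas from our notes plus a trend-setter's tweet.
// Step 2: feed those ideas into a second prompt that writes a draft tweet.
async function brainstormAndDraft(
  notes: string[],
  tweet: string,
  complete: Complete
): Promise<{ ideas: string; draft: string }> {
  const ideas = await complete(
    `Here are my notes:\n${notes.join('\n')}\n\nHere is a trending tweet:\n"${tweet}"\n\nBrainstorm related content ideas.`
  );
  const draft = await complete(`Turn the best of these ideas into a tweet draft:\n${ideas}`);
  return { ideas, draft };
}

// A stubbed-out "LLM" so the sketch is runnable as-is:
const stubLLM: Complete = async (prompt) => `(completion for: ${prompt.split('\n')[0]})`;

brainstormAndDraft(['React hooks I love'], 'TypeScript makes refactoring safer', stubLLM).then(
  ({ draft }) => console.log(draft)
);
```

In the app itself, `stubLLM` is replaced by actual ChatGPT completion calls, and the notes come from the similarity search against our vector store rather than a hard-coded array.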
This "chain" of prompting is essentially where the LangChain package gets its name 🙂

![The flow of information through the app](../static/img/build-your-own-twitter-agent/Untitled.png)

This approach will give us a wealth of new ideas and tweet drafts related to our favorite trend-setting Twitter users' tweets. We can look through these, edit and save our favorite ideas to our "notes" vector store, or maybe send off some tweets. I've personally been using this app for a while now, and not only has it generated some great ideas, but it also helps to inspire new ones (even if some of the ideas it generates are "meh"), which is why I included an "Add Note" feature front and center in the nav bar.

![twitter-agent-add-note.png](../static/img/build-your-own-twitter-agent/twitter-agent-add-note.png)

Ok. Enough background. Let's start building your own personal Twitter intern! 🤖

BTW, if you get stuck at all while following the tutorial, you can always reference this tutorial's repo, which has the finished app: [Twitter Intern GitHub Repo](https://github.com/vincanger/twitter-intern)

## Configuration

### Set up your Wasp project

We're going to make this a full-stack React/NodeJS web app, so we need to get that set up first. But don't worry, it won't take long AT ALL, because we will be using Wasp as the framework. Wasp does all the heavy lifting for us. You'll see what I mean in a second.

```bash
# First, install Wasp by running this in your terminal:
curl -sSL https://get.wasp-lang.dev/installer.sh | sh

# next, create a new project:
wasp new twitter-agent

# cd into the new directory and start the project:
cd twitter-agent && wasp start
```

Great! When running `wasp start`, Wasp will install all the necessary npm packages, start our server on port 3001, and our React client on port 3000. Head to [localhost:3000](http://localhost:3000) in your browser to check it out.
![Untitled](../static/img/build-your-own-twitter-agent/Untitled%201.png)

:::tip Tip
ℹ️ you can install the [Wasp vscode extension](https://marketplace.visualstudio.com/items?itemName=wasp-lang.wasp) for the best developer experience.
:::

You'll notice Wasp sets up your full-stack app with a file structure like so:

```bash
.
├── main.wasp # The wasp config file.
└── src
    ├── client # Your React client code (JS/CSS/HTML) goes here.
    ├── server # Your server code (Node JS) goes here.
    └── shared # Your shared (runtime independent) code goes here.
```

Let's start adding some server-side code.

### Server-Side & Database Entities

Start by adding a `.env.server` file in the root directory of your project:

```bash
# https://platform.openai.com/account/api-keys
OPENAI_API_KEY=

# sign up for a free tier account at https://www.pinecone.io/
PINECONE_API_KEY=
# will be a location, e.g. 'us-west4-gcp-free'
PINECONE_ENV=

# We will fill these in later during the Twitter Scraping section
# Twitter details -- only needed once for Rettiwt.account.login() to get the tokens
TWITTER_EMAIL=
TWITTER_HANDLE=
TWITTER_PASSWORD=
# TOKENS -- fill these in after running the getTwitterTokens script in the Twitter Scraping section
KDT=
TWID=
CT0=
AUTH_TOKEN=
```

We need a way to store all our great ideas, so let's first head to [Pinecone.io](http://Pinecone.io) and set up a free trial account.

![Untitled](../static/img/build-your-own-twitter-agent/Untitled%202.png)

In the Pinecone dashboard, go to API Keys and create a new one. Copy and paste your `Environment` and `API Key` values into `.env.server`.

Do the same for OpenAI, by creating an account and key at [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).

Now let's replace the contents of the `main.wasp` config file, which is like the "skeleton" of your app, with the code below.
This will configure most of the full-stack app for you 🤯

```tsx
app twitterAgent {
  wasp: {
    version: "^0.10.6"
  },
  title: "twitter-agent",
  head: [
    ""
  ],
  db: {
    system: PostgreSQL,
  },
  auth: {
    userEntity: User,
    onAuthFailedRedirectTo: "/login",
    methods: {
      usernameAndPassword: {},
    }
  },
  dependencies: [
    ("openai", "3.2.1"),
    ("rettiwt-api", "1.1.8"),
    ("langchain", "0.0.91"),
    ("@pinecone-database/pinecone", "0.1.6"),
    ("@headlessui/react", "1.7.15"),
    ("react-icons", "4.8.0"),
    ("react-twitter-embed", "4.0.4")
  ],
}

// ### Database Models

entity Tweet {=psl
  id              Int             @id @default(autoincrement())
  tweetId         String
  authorUsername  String
  content         String
  tweetedAt       DateTime        @default(now())
  user            User            @relation(fields: [userId], references: [id])
  userId          Int
  drafts          TweetDraft[]
  ideas           GeneratedIdea[]
psl=}

entity TweetDraft {=psl
  id              Int      @id @default(autoincrement())
  content         String
  notes           String
  originalTweet   Tweet    @relation(fields: [originalTweetId], references: [id])
  originalTweetId Int
  createdAt       DateTime @default(now())
  user            User     @relation(fields: [userId], references: [id])
  userId          Int
psl=}

entity GeneratedIdea {=psl
  id              Int      @id @default(autoincrement())
  content         String
  createdAt       DateTime @default(now())
  updatedAt       DateTime @default(now())
  user            User     @relation(fields: [userId], references: [id])
  userId          Int
  originalTweet   Tweet?   @relation(fields: [originalTweetId], references: [id])
  originalTweetId Int?
  isEmbedded      Boolean  @default(false)
psl=}

entity User {=psl
  id              Int             @id @default(autoincrement())
  username        String          @unique
  password        String
  createdAt       DateTime        @default(now())
  favUsers        String[]
  originalTweets  Tweet[]
  tweetDrafts     TweetDraft[]
  generatedIdeas  GeneratedIdea[]
psl=}

// <<< Client Pages & Routes

route RootRoute { path: "/", to: MainPage }
page MainPage {
  authRequired: true,
  component: import Main from "@client/MainPage"
}

//...
```

:::note
You might have noticed this `{=psl psl=}` syntax in the entities above.
This denotes that anything in between these `psl` brackets is actually a different language, in this case, [Prisma Schema Language](https://www.prisma.io/docs/concepts/components/prisma-schema). Wasp uses Prisma under the hood, so if you've used Prisma before, it should be straightforward.
:::

As you can see, our `main.wasp` config file contains our:

- dependencies,
- authentication method,
- database type, and
- database models ("entities")

With this, our app structure is mostly defined and Wasp will take care of a ton of configuration for us.

### Database Setup

But we still need to get a Postgres database running. Usually this can be pretty annoying, but with Wasp, just have [Docker Desktop](https://www.docker.com/products/docker-desktop/) installed and running, then open up **another separate terminal tab/window** and run:

```bash
wasp start db
```

This will start and connect your app to a Postgres database for you. No need to do anything else! 🤯 Just leave this terminal tab, along with Docker Desktop, open and running in the background.

In a different terminal tab, run:

```bash
wasp db migrate-dev
```

and make sure to give your database migration a name. If you stopped the Wasp dev server to run this command, go ahead and start it again with `wasp start`.

At this point, our app will navigate us to [localhost:3000/login](http://localhost:3000/login), but because we haven't implemented a login screen/flow yet, we will see a blank screen. Don't worry, we'll get to that.

## Embedding Ideas & Notes

### Server Action

First though, in the `main.wasp` config file, let's define a server action for saving notes and ideas. Go ahead and add the code below to the bottom of the file:

```tsx
// main.wasp

//...

// <<< Client Pages & Routes

route RootRoute { path: "/", to: MainPage }
page MainPage {
  authRequired: true,
  component: import Main from "@client/MainPage"
}

// !!!
// Actions

action embedIdea {
  fn: import { embedIdea } from "@server/ideas.js",
  entities: [GeneratedIdea]
}
```

With the action declared, let's create it. Make a new file, `src/server/ideas.ts`, and add the following code:

```tsx
import type { EmbedIdea } from '@wasp/actions/types';
import type { GeneratedIdea } from '@wasp/entities';
import HttpError from '@wasp/core/HttpError.js';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { Document } from 'langchain/document';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeClient } from '@pinecone-database/pinecone';

const pinecone = new PineconeClient();

export const initPinecone = async () => {
  await pinecone.init({
    environment: process.env.PINECONE_ENV!,
    apiKey: process.env.PINECONE_API_KEY!,
  });
  return pinecone;
};

export const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
});

/**
 * Embeds a single idea into the vector store
 */
export const embedIdea: EmbedIdea<{ idea: string }, GeneratedIdea> = async ({ idea }, context) => {
  if (!context.user) {
    throw new HttpError(401, 'User is not authorized');
  }

  console.log('idea: ', idea);

  try {
    let newIdea = await context.entities.GeneratedIdea.create({
      data: {
        content: idea,
        userId: context.user.id,
      },
    });

    if (!newIdea) {
      throw new HttpError(404, 'Idea not found');
    }

    const pinecone = await initPinecone();

    // we need to create an index to save the vector embeddings to
    // an index is similar to a table in the relational database world
    const availableIndexes = await pinecone.listIndexes();
    if (!availableIndexes.includes('embeds-test')) {
      console.log('creating index');
      await pinecone.createIndex({
        createRequest: {
          name: 'embeds-test',
          // open ai uses 1536 dimensions for their embeddings
          dimension: 1536,
        },
      });
    }

    const pineconeIndex = pinecone.Index('embeds-test');

    // the LangChain vectorStore wrapper
    const vectorStore = new PineconeStore(embeddings, {
      pineconeIndex: pineconeIndex,
      namespace:
        context.user.username,
    });

    // create a document with the idea's content to be embedded
    const ideaDoc = new Document({
      metadata: { type: 'note' },
      pageContent: newIdea.content,
    });

    // add the document to the vector store along with its id
    await vectorStore.addDocuments([ideaDoc], [newIdea.id.toString()]);

    newIdea = await context.entities.GeneratedIdea.update({
      where: {
        id: newIdea.id,
      },
      data: {
        isEmbedded: true,
      },
    });
    console.log('idea embedded successfully!', newIdea);
    return newIdea;
  } catch (error: any) {
    throw new Error(error);
  }
};
```

:::info
We've defined the action function in our `main.wasp` file as coming from '@server/ideas.js', but we're creating an `ideas.ts` file. What's up with that?!

Well, Wasp internally uses `esnext` module resolution, which always requires specifying the extension as `.js` (i.e., the extension used in the emitted JS file). This applies to all `@server` imports (and files on the server in general). It does not apply to client files.
:::

Great! Now we have a server action for adding notes and ideas to our vector database. And we didn't even have to configure a server ourselves (thanks, Wasp 🙂).

Let's take a step back and walk through the code we just wrote:

1. We create a new Pinecone client and initialize it with our API key and environment.
2. We create a new OpenAIEmbeddings client and initialize it with our OpenAI API key.
3. We create a new index in our Pinecone database to store our vector embeddings.
4. We create a new PineconeStore, which is a LangChain wrapper around our Pinecone client and our OpenAIEmbeddings client.
5. We create a new Document with the idea's content to be embedded.
6. We add the document to the vector store along with its id.
7. We also update the idea in our Postgres database to mark it as embedded.

Now we want to create the client-side functionality for adding ideas, but you'll remember we defined an `auth` object in our wasp config file.
So we'll need to add the ability to log in before we do anything on the frontend.

### Authentication

Let's add that quickly by adding a new Route and Page definition to our `main.wasp` file:

```tsx
//...

route LoginPageRoute { path: "/login", to: LoginPage }
page LoginPage {
  component: import Login from "@client/LoginPage"
}
```

…and create the file `src/client/LoginPage.tsx` with the following content:

```tsx
import { LoginForm } from '@wasp/auth/forms/Login';
import { SignupForm } from '@wasp/auth/forms/Signup';
import { useState } from 'react';

export default () => {
  const [showSignupForm, setShowSignupForm] = useState(false);

  const handleShowSignupForm = () => {
    setShowSignupForm((x) => !x);
  };

  return (
    <>
      {showSignupForm ? <SignupForm /> : <LoginForm />}
      <button onClick={handleShowSignupForm}>
        {showSignupForm ? 'Already Registered? Login!' : 'No Account? Sign up!'}
      </button>
    </>
  );
};
```

:::info
In the `auth` object in the `main.wasp` file, we used the `usernameAndPassword` method, which is the simplest form of auth Wasp offers. If you're interested, [Wasp](https://wasp-lang.dev/docs) also provides abstractions for Google, GitHub, and email-verified authentication, but we will stick with the simplest auth for this tutorial.
:::

With authentication all set up, if we try to go to [localhost:3000](http://localhost:3000) we will be automatically redirected to the login/register form. You'll see that Wasp creates Login and Signup forms for us because of the `auth` object we defined in the `main.wasp` file. Sweet! 🎉

But even though we've added some style classes, we haven't set up any CSS styling, so it will probably be pretty ugly right about now. 🤢 Barf.

![Untitled](../static/img/build-your-own-twitter-agent/Untitled%203.png)

### Adding Tailwind CSS

Luckily, Wasp comes with Tailwind CSS support, so all we have to do to get that working is add the following files in the root directory of the project:

```bash
.
├── main.wasp
├── src
│   ├── client
│   ├── server
│   └── shared
├── postcss.config.cjs # add this file here
├── tailwind.config.cjs # and this here too
└── .wasproot
```

`postcss.config.cjs`

```jsx
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};
```

`tailwind.config.cjs`

```jsx
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: ['./src/**/*.{js,jsx,ts,tsx}'],
  theme: {
    extend: {},
  },
  plugins: [],
};
```

Finally, replace the contents of your `src/client/Main.css` file with these lines:

```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```

Now we've got the magic of [Tailwind CSS](https://tailwindcss.com/) on our side! 🎨 We'll get to styling later though. Patience, young grasshopper.
### Adding Notes Client-side

From here, let's create the complementary client-side components for adding notes to the vector store. Create a new `src/client/AddNote.tsx` file with the following contents:

```tsx
import { useState } from 'react';
import embedIdea from '@wasp/actions/embedIdea';

export default function AddNote() {
  const [idea, setIdea] = useState('');
  const [isIdeaEmbedding, setIsIdeaEmbedding] = useState(false);

  const handleEmbedIdea = async (e: any) => {
    try {
      setIsIdeaEmbedding(true);
      if (!idea) {
        throw new Error('Idea cannot be empty');
      }
      const embedIdeaResponse = await embedIdea({
        idea,
      });

      console.log('embedIdeaResponse: ', embedIdeaResponse);
    } catch (error: any) {
      alert(error.message);
    } finally {
      setIdea('');
      setIsIdeaEmbedding(false);
    }
  };

  return (