Quivr - Your GenerativeAI Second Brain

Quivr-logo

Join our Discord

Quivr is your GenerativeAI second brain, designed to easily store and retrieve unstructured information. It's like Obsidian but powered by generative AI.

Features

  • Store Anything: Quivr can handle almost any type of data you throw at it. Text, images, code snippets, you name it.
  • Generative AI: Quivr uses advanced AI to help you generate and retrieve information.
  • Fast and Efficient: Designed with speed and efficiency in mind, Quivr makes sure you can access your data as quickly as possible.
  • Secure: Your data is always under your control.
  • Compatible Files:
    • Text
    • Markdown
    • PDF
    • PowerPoint
    • Excel
    • Word
    • Audio
    • Video
  • Open Source: Quivr is open source and free to use.

THE STREAMLIT DEMO USES THE OLD VERSION

The new version uses a new UI and is not yet deployed, as it doesn't yet have all the features of the old version. It should be up and live before 25/05/2023.

Demo with GPT3.5

https://github.com/StanGirard/quivr/assets/19614572/80721777-2313-468f-b75e-09379f694653

Demo with Claude 100k context

https://github.com/StanGirard/quivr/assets/5101573/9dba918c-9032-4c8d-9eea-94336d2c8bd4

Getting Started with the new version

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. The readme for the old version is in the streamlit-demo folder.

Prerequisites

Make sure you have the following installed before continuing:

  • Docker
  • Docker Compose

You'll also need a Supabase account for:

  • A new Supabase project
  • Supabase Project API key
  • Supabase Project URL
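
To check that Docker and the Compose plugin are available, you can run:

docker --version
docker compose version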

Installing

  • Clone the repository
git clone git@github.com:StanGirard/Quivr.git && cd Quivr
  • Copy the .XXXXX_env files
cp .backend_env.example .backend_env
cp .frontend_env.example .frontend_env
  • Update the .backend_env file

Note that the supabase_service_key is found in your Supabase dashboard under Project Settings -> API. Use the anon public key found in the Project API keys section.
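
For reference, a minimal .backend_env could look something like the snippet below. The variable names are only illustrative; .backend_env.example lists the exact keys this version expects.

# Illustrative values only - see .backend_env.example for the exact variable names
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_SERVICE_KEY=your-supabase-api-key
OPENAI_API_KEY=sk-xxxxxxxx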

  • Run the following migration scripts on the Supabase database via the web interface (SQL Editor -> New query). A sketch showing how these tables are queried from Python appears after the installation steps.
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)
    RETURNS TABLE(
        id bigint,
        content text,
        metadata jsonb,
        -- we return matched vectors to enable maximal marginal relevance searches
        embedding vector(1536),
        similarity float)
    LANGUAGE plpgsql
    AS $$
    # variable_conflict use_column
BEGIN
    RETURN query
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (documents.embedding <=> query_embedding) AS similarity
    FROM
        documents
    ORDER BY
        documents.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;

and

create table stats (
  -- A column called "time" with data type "timestamp"
  time timestamp,
  -- Columns describing the logged event
  chat boolean,
  embedding boolean,
  details text,
  metadata jsonb,
  -- An "integer" primary key column called "id" that is generated always as identity
  id integer primary key generated always as identity
);
  • Run the app
docker compose build && docker compose up
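
Once the containers are running and the migration above has been applied, the documents table and match_documents function can be exercised from Python with LangChain's SupabaseVectorStore. The snippet below is only a sketch to show how the pieces fit together, not Quivr's actual backend code; it assumes the (illustrative) credential names from .backend_env are exported in your shell.

import os

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase import create_client

# Supabase and OpenAI credentials from .backend_env (variable names are illustrative)
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

# OpenAI embeddings are 1536-dimensional, matching vector(1536) in the migration above
embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])

# Point LangChain at the documents table and the match_documents function
vector_store = SupabaseVectorStore(
    client=supabase,
    embedding=embeddings,
    table_name="documents",
    query_name="match_documents",
)

# Store a snippet, then retrieve the most similar rows for a question
vector_store.add_texts(["Quivr stores unstructured information as embeddings."])
docs = vector_store.similarity_search("What does Quivr store?", k=2)
print(docs[0].page_content)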

Built With

  • Python - The programming language used.
  • Supabase - The open source Firebase alternative.

Contributing

Open a pull request and we'll review it as soon as possible.

Star History

Star History Chart