Your GenAI second brain 🧠 A personal productivity assistant (RAG) 🤖 Chat with your docs (PDF, CSV, ...) & apps using Langchain, GPT-3.5/4 Turbo, Anthropic, VertexAI, Ollama, and other private LLMs, and share it with your users! A local & private alternative.
shaun e08995835a Support for Anthropics Models
This update enhances the "Second Brain" application by adding support for Anthropic's AI models. Users can now store and query their knowledge not only with OpenAI's GPT-3/4 but also with Anthropic's Claude models.

Key changes include:

  • Added an anthropic_api_key field to the secrets configuration file.
  • Introduced a selection of AI models, including GPT-3, GPT-4, and several versions of Claude.
  • Updated question handling to be model-agnostic and added support for Anthropic's Claude models in the question-processing workflow.
  • Modified the Streamlit interface so users can choose a model, control the "temperature" of its responses, and set the max-tokens limit (a sketch of this flow follows below).
  • Upgraded requirements.txt to the latest version of the Anthropic library.

This update lets users pick the AI model that best fits their needs, making the tool more flexible and robust for knowledge management.
2023-05-14 01:30:03 -07:00
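
The model selection described above roughly corresponds to a Streamlit sidebar that dispatches to the chosen provider. The snippet below is a minimal sketch of that flow, not the actual main.py: the widget labels, model names, and use of LangChain's ChatOpenAI / ChatAnthropic wrappers are assumptions based on the commit description.

    import streamlit as st
    from langchain.chat_models import ChatAnthropic, ChatOpenAI

    # Illustrative sketch only; widget labels and the model list are assumptions.
    model = st.sidebar.selectbox(
        "Model", ["gpt-3.5-turbo", "gpt-4", "claude-v1", "claude-instant-v1"]
    )
    temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.0)
    max_tokens = st.sidebar.slider("Max tokens", 256, 4096, 1024)

    if model.startswith("claude"):
        # Claude models read the new anthropic_api_key secret.
        llm = ChatAnthropic(
            anthropic_api_key=st.secrets["anthropic_api_key"],
            model=model,
            temperature=temperature,
            max_tokens_to_sample=max_tokens,
        )
    else:
        llm = ChatOpenAI(
            openai_api_key=st.secrets["openai_api_key"],
            model_name=model,
            temperature=temperature,
            max_tokens=max_tokens,
        )
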
.streamlit Support for Anthropics Models 2023-05-14 01:30:03 -07:00
.vscode Support for Anthropics Models 2023-05-14 01:30:03 -07:00
loaders feat(metadata): added file size 2023-05-13 01:12:51 +02:00
.gitignore Add gitignore 2023-05-13 09:30:01 -06:00
2023-05-13-02-16-02.png feat(demo): added 2023-05-13 02:16:41 +02:00
brain.py feat(forget): now able to forget things 2023-05-13 01:30:00 +02:00
Dockerfile fix(requirements): fixed the issue 2023-05-13 16:37:18 +02:00
files.py feat(pdf): added pdf loader 2023-05-13 00:25:12 +02:00
LICENSE feat(license): added 2023-05-13 18:12:35 +02:00
logo.png feat(readme): first iteration 2023-05-13 02:02:45 +02:00
main.py Support for Anthropics Models 2023-05-14 01:30:03 -07:00
question.py Support for Anthropics Models 2023-05-14 01:30:03 -07:00
README.md Update README.md 2023-05-13 19:56:54 +02:00
requirements.txt Support for Anthropics Models 2023-05-14 01:30:03 -07:00
sidebar.py feat(visual): moved things around 2023-05-12 23:58:19 +02:00
utils.py feat(init): init repository 2023-05-12 23:05:31 +02:00

Quiver

quiver-logo

Quiver is your second brain in the cloud, designed to easily store and retrieve unstructured information. It's like Obsidian but powered by generative AI.
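
Under the hood, "store" means embedding your content and saving the vectors next to the text so they can be searched later. The snippet below is a minimal sketch of that flow, assuming the Supabase documents table created in the Getting Started section; the sample text, metadata fields, and placeholder credentials are illustrative, not Quiver's actual loader code.

    from langchain.embeddings.openai import OpenAIEmbeddings
    from supabase import create_client

    # Illustrative ingestion flow; assumes the documents table from Getting Started.
    supabase = create_client("SUPABASE_URL", "SUPABASE_SERVICE_KEY")
    embeddings = OpenAIEmbeddings(openai_api_key="OPENAI_API_KEY")

    text = "pgvector adds a vector column type and similarity operators to Postgres."
    supabase.table("documents").insert({
        "content": text,                            # corresponds to Document.pageContent
        "metadata": {"source": "notes.md"},         # corresponds to Document.metadata
        "embedding": embeddings.embed_query(text),  # 1536-dim OpenAI embedding
    }).execute()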

Features

  • Store Anything: Quiver can handle almost any type of data you throw at it. Text, images, code snippets, you name it.
  • Generative AI: Quiver uses advanced AI to help you generate and retrieve information.
  • Fast and Efficient: Designed with speed and efficiency in mind, Quiver makes sure you can access your data as quickly as possible.
  • Secure: Your data is stored securely in the cloud and is always under your control.
  • Compatible Files:
    • Text
    • Markdown
    • PDF
    • Audio
    • Video
  • Open Source: Quiver is open source and free to use.

Demo

https://github.com/StanGirard/quiver/assets/19614572/a3cddc6a-ca28-44ad-9ede-3122fa918b51

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

What you need before installing and running the software:

  • Python 3.10 or higher
  • Pip
  • Virtualenv
  • Supabase account
  • Supabase API key
  • Supabase URL

Installing

  • Clone the repository
git clone git@github.com:StanGirard/quiver.git && cd quiver
  • Create a virtual environment
virtualenv venv
  • Activate the virtual environment
source venv/bin/activate
  • Install the dependencies
pip install -r requirements.txt
  • Copy the streamlit secrets.toml example file
cp .streamlit/secrets.toml.example .streamlit/secrets.toml
  • Add your credentials to the .streamlit/secrets.toml file
supabase_url = "SUPABASE_URL"
supabase_service_key = "SUPABASE_SERVICE_KEY"
openai_api_key = "OPENAI_API_KEY"
  • Run the migration script on the Supabase database via the web interface (the retrieval sketch after these installation steps shows how the app uses it)
       -- Enable the pgvector extension to work with embedding vectors
       create extension vector;

       -- Create a table to store your documents
       create table documents (
       id bigserial primary key,
       content text, -- corresponds to Document.pageContent
       metadata jsonb, -- corresponds to Document.metadata
       embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
       );

       CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)
           RETURNS TABLE(
               id bigint,
               content text,
               metadata jsonb,
               -- we return matched vectors to enable maximal marginal relevance searches
               embedding vector(1536),
               similarity float)
           LANGUAGE plpgsql
           AS $$
           #variable_conflict use_column
       BEGIN
           RETURN query
           SELECT
               id,
               content,
               metadata,
               embedding,
                1 - (documents.embedding <=> query_embedding) AS similarity
           FROM
               documents
           ORDER BY
               documents.embedding <=> query_embedding
           LIMIT match_count;
       END;
       $$;
  • Run the app
streamlit run main.py
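
Once the app is running, a question is answered by embedding it and calling the match_documents function created by the migration above. The snippet below is a minimal sketch of that round trip; the question text is just an example, and this is not the app's own code in question.py.

    import streamlit as st
    from langchain.embeddings.openai import OpenAIEmbeddings
    from supabase import create_client

    # Credentials are read from .streamlit/secrets.toml, as configured above.
    supabase = create_client(st.secrets["supabase_url"], st.secrets["supabase_service_key"])
    embeddings = OpenAIEmbeddings(openai_api_key=st.secrets["openai_api_key"])

    # Embed the question, then ask Postgres for the closest stored chunks
    # via the match_documents function.
    query_embedding = embeddings.embed_query("What did I note about pgvector?")
    response = supabase.rpc(
        "match_documents",
        {"query_embedding": query_embedding, "match_count": 5},
    ).execute()

    for row in response.data:
        print(row["similarity"], row["content"][:80])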

Built With

  • Python - The programming language used.
  • Streamlit - The web framework used.
  • Supabase - The open source Firebase alternative.

Contributing

Open a pull request and we'll review it as soon as possible.