Fabelous-Ai-Chat

Video Demo: https://youtu.be/A5mtqQLF000

Description:

This is my Final Project for CS50P: an implementation of a RAG system using Chroma DB and Ollama for the backend, along with customtkinter for the frontend.

Features

  1. Custom GUI built with customtkinter on top of the standard tkinter user interface.
    • Optional terminal mode, with colored output provided by the termcolor library
  2. RAG system to retrieve and process files from various sources (CSV, HTML, MD, PDF, and webpages) using ChromaDB.
  3. Seamless integration with the Ollama API for generating responses based on the input text.
  4. Supports both context-based and context-free conversations.
  5. Code highlighting: code snippets are either highlighted in red within the terminal or displayed in a separate widget in the GUI, which improves their readability.

Installation

  1. Clone the repository or download the project files from: https://gitea.fabelous.app/fabel/Fabelous-Ai-Chat
  2. Install the required dependencies by running:
pip install -r requirements.txt
  3. To run this project properly, ensure that Python 3 and the Tkinter libraries are installed.
  • For macOS users who installed Python 3 via Homebrew, run the following command in your terminal:
brew install python-tk
  • For Linux users (Debian/Ubuntu), run the following command in your terminal:
apt-get install python3-tk
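To confirm that Tkinter is available after installation, a quick check like the following can be run (this helper is not part of the project, just a convenience):

```python
# Check whether the tkinter module can be found without opening a window.
import importlib.util

if importlib.util.find_spec("tkinter") is not None:
    print("tkinter is installed")
else:
    print("tkinter is missing -- install python3-tk (Linux) or python-tk (Homebrew)")
```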

Configuration

Please note that you need access to an Ollama instance with both an embeddings model and a chat model to use this interface effectively. Ensure that you have set up your instance accordingly before configuring this application.

To configure the Fabelous-Ai-Chat, run the following command and answer the prompts accordingly:

python project.py --config

If you just want to switch modes, for example from the GUI to the terminal version, type:

python project.py -m terminal

During configuration, you'll be prompted to enter the following information:

  • Mode: gui or terminal
  • Base Ollama URL for the LLM e.g.: http://localhost:11434
  • Base Ollama URL for embeddings e.g.: http://localhost:11434
  • The base model name, e.g.: mistral
  • The base embeddings model name, e.g.: mxbai-embed-large
  • Authentication headers for the base and embeddings models (if required)
    • An authentication header could look like this: {"Authorization": "Token xzy"}

After providing this information, your configuration file will be saved in the config folder as config.json.
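Based on the prompts above, the saved config/config.json might look roughly like the following. The exact key names are an assumption for illustration; the real file is generated by `python project.py --config`:

```json
{
  "mode": "gui",
  "base_url": "http://localhost:11434",
  "embeddings_url": "http://localhost:11434",
  "model": "mistral",
  "embeddings_model": "mxbai-embed-large",
  "headers": {"Authorization": "Token xzy"}
}
```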

Usage in Terminal Mode

To use the Fabelous-Ai-Chat in terminal mode, run the following command and provide a prompt:

python project.py -p "Your prompt here"

You can also provide an input file/link using the -f option:

python project.py -p "Your prompt here" -f "path/to/yourfile.csv"

Usage in GUI Mode

To use the Fabelous-Ai-Chat in GUI mode, simply run the following command:

python project.py

Supported files

You can provide the following files / links as context to Ollama:

  • .csv
  • .html
  • .md
  • .pdf
  • links (note that only SSL-secured, i.e. HTTPS, websites are supported)
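The list above suggests a loader that dispatches on the context source's type. The sketch below is a hypothetical illustration of that dispatch; the project's actual get_file logic may differ:

```python
# Hypothetical sketch of dispatching a context source to a loader by type.
# The loader names and this function are illustrative, not the project's code.
from pathlib import Path

def pick_loader(source: str) -> str:
    if source.startswith("https://"):  # only SSL-secured links are supported
        return "web"
    suffix = Path(source).suffix.lower()
    loaders = {".csv": "csv", ".html": "html", ".md": "markdown", ".pdf": "pdf"}
    if suffix in loaders:
        return loaders[suffix]
    raise ValueError(f"Unsupported context source: {source}")

print(pick_loader("notes.md"))  # markdown
```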

How the RAG works in the GUI:

The RAG is designed to create conversational agents that leverage external sources for more accurate responses. The Rag class in Rag.py plays a crucial role in this process. When a user submits a message through the chat interface, the get_response_from_ollama method comes into play. This method checks for any context associated with the message and processes it if required.

If necessary, it calls the receive_data method of the Rag instance to handle new context and fetch the relevant data using the get_file method. The fetched content is then added to an in-memory database; it is not persistent, so the data is lost when the application closes. Following that, the get_request method of the Rag instance is called with the user's message as input. It generates a response with the selected model from the Ollama instance, utilizing the contents of the Chroma DB for better accuracy. For better results, the last five responses are included in the context if no other context is provided.
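The flow described above can be sketched as follows. This is a minimal stand-in, not the project's actual Rag class: ChromaDB is replaced by a plain in-memory list and the Ollama call by a stub, so only the control flow (ingest context, fall back to recent history, generate, remember the response) is shown:

```python
# Minimal sketch of the GUI-mode RAG flow; all names are hypothetical
# stand-ins for the real Rag class, ChromaDB, and the Ollama API.
class RagSketch:
    def __init__(self, history_limit=5):
        self.store = []            # non-persistent: lost when the app closes
        self.history = []          # past responses, reused as fallback context
        self.history_limit = history_limit

    def receive_data(self, source_text):
        """Stand-in for fetching a file/link and adding it to the store."""
        self.store.append(source_text)

    def get_request(self, message):
        """Build context from the store (or recent history) and respond."""
        if self.store:
            context = " ".join(self.store)
        else:
            # No explicit context: fall back to the last few responses.
            context = " ".join(self.history[-self.history_limit:])
        response = f"[answer to {message!r} using context: {context[:40]}]"
        self.history.append(response)
        return response

rag = RagSketch()
rag.receive_data("CS50P is Harvard's introduction to Python.")
print(rag.get_request("What is CS50P?"))
```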

Difference to terminal mode:

In terminal mode, context is provided through command-line arguments as a single file. This approach has its limitations, as it only allows for a single context file during a conversation, restricting versatility when dealing with diverse contexts.
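The command-line interface implied by the flags mentioned in this README (-p, -f, -m, --config) could be modeled roughly like this. The parser details are an assumption for illustration, not the project's actual argument handling:

```python
# Hedged sketch of the CLI flags described in this README; the real
# project.py may define them differently.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="project.py")
    parser.add_argument("-p", "--prompt", help="prompt for terminal mode")
    parser.add_argument("-f", "--file", help="single context file or link")
    parser.add_argument("-m", "--mode", choices=["gui", "terminal"],
                        help="switch between GUI and terminal mode")
    parser.add_argument("--config", action="store_true",
                        help="run the interactive configuration prompts")
    return parser

args = build_parser().parse_args(["-p", "Your prompt here", "-f", "data.csv"])
print(args.prompt, args.file)  # Your prompt here data.csv
```

Note how -f accepts exactly one value, which matches the single-context-file limitation described above.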

License

Fabelous-Ai-Chat is made available under the MIT license. For more details, please refer to the LICENSE file included in the distribution.

This project uses models from Mistral AI and Mixedbread.ai by default, which are made available under the Apache 2.0 license. For more details, check out their model pages on Ollama: Mistral and mxbai-embed-large. This project also uses ChromaDB as the vector database, which is likewise available under the Apache 2.0 license. For more information, check out its GitHub page.