Project information
- Category: LLMs (Large Language Models)
- Project date: February 2025
- Project URL: github.com/mem-rag
Using RAG to create external memory for an LLM
mem-rag is a Retrieval-Augmented Generation (RAG) system that combines LangChain, ChromaDB, and an open-source LLM (`Qwen/Qwen2.5-1.5B-Instruct`) to extract relevant content from PDF documents and generate accurate, context-aware responses to user queries. By retrieving relevant passages at query time, it improves answer accuracy, reduces the need for extensive fine-tuning, and scales efficiently to larger document collections. The project uses Docker and Poetry for straightforward packaging and deployment, making it adaptable across domains. Planned improvements include support for additional document formats, refined chunking strategies, and a web-based interface.
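The core RAG loop — split documents into chunks, index them, retrieve the chunks most relevant to a query, and prepend them to the LLM prompt as external memory — can be sketched without the full LangChain/ChromaDB stack. The sketch below is a minimal, dependency-free illustration, not the project's actual code: it uses bag-of-words cosine similarity in place of learned embeddings, and all function names and the chunk size are illustrative assumptions.

```python
import math
import re
from collections import Counter

def chunk_text(text, chunk_size=8):
    # Split a document into fixed-size word chunks. Real pipelines use
    # overlap-aware splitters (e.g. LangChain's RecursiveCharacterTextSplitter).
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def embed(text):
    # Stand-in for a learned embedding model: a term-frequency vector.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank all chunks by similarity to the query; return the top-k.
    # This is the role ChromaDB plays in the real system.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, context_chunks):
    # Retrieved chunks become "external memory" prepended to the LLM prompt.
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

doc = ("Poetry manages Python dependencies. Docker packages the app for deployment. "
       "ChromaDB stores vector embeddings for fast similarity search.")
chunks = chunk_text(doc)
query = "How are embeddings stored?"
prompt = build_prompt(query, retrieve(query, chunks))
```

In the real system, `embed` would call an embedding model, `retrieve` would query a ChromaDB collection, and `prompt` would be sent to `Qwen/Qwen2.5-1.5B-Instruct`; the surrounding flow is the same.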
