# Mem0 Interface - Production Ready
A fully operational Mem0 interface with PostgreSQL and Neo4j integration, featuring intelligent model routing, comprehensive memory management, and production-grade monitoring.
## Features

### Core Memory System
- ✅ Mem0 OSS Integration: Complete hybrid datastore (Vector + Graph + KV storage)
- ✅ PostgreSQL + pgvector: High-performance vector embeddings storage
- ✅ Neo4j 5.18: Graph relationships with native vector similarity functions
- ✅ Google Gemini Embeddings: Enterprise-grade embedding generation
- ✅ Memory Operations: Store, search, update, delete memories with semantic search
- ✅ Graph Relationships: Automatic entity extraction and relationship mapping
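The hybrid datastore above is wired together through a single Mem0 configuration. A minimal sketch of what that looks like (provider names follow Mem0's config schema; the hostnames, ports, credentials, and embedding model name are illustrative assumptions, not this project's actual values):

```python
# Illustrative Mem0 OSS configuration for the hybrid datastore described above.
# Hostnames assume a docker-compose network; credentials are placeholders.
mem0_config = {
    "vector_store": {
        "provider": "pgvector",  # PostgreSQL + pgvector for embeddings
        "config": {
            "host": "postgres",
            "port": 5432,
            "dbname": "mem0",
            "user": "mem0",
            "password": "change-me",
        },
    },
    "graph_store": {
        "provider": "neo4j",  # Neo4j for entity relationships
        "config": {
            "url": "bolt://neo4j:7687",
            "username": "neo4j",
            "password": "change-me",
        },
    },
    "embedder": {
        "provider": "gemini",  # Google Gemini embeddings
        "config": {"model": "models/text-embedding-004"},
    },
}

# A config like this would typically be passed to Memory.from_config(mem0_config).
```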
### AI & Model Integration
- ✅ Custom OpenAI Endpoint: Integration with custom LLM endpoint
- ✅ Memory-Enhanced Chat: Context-aware conversations with long-term memory
- ✅ Single Model Architecture: Simplified, reliable claude-sonnet-4 integration
### Production Features
- ✅ FastAPI Backend: RESTful API with comprehensive error handling
- ✅ Docker Compose: Fully containerized deployment with health checks
- ✅ Production Monitoring: Real-time statistics and performance tracking
- ✅ Structured Logging: Correlation IDs and operation timing
- ✅ Performance Analytics: API usage patterns and response time monitoring
## Quick Start

1. **Prerequisites:**
   - Docker and Docker Compose
   - Custom OpenAI-compatible API endpoint access
   - Google Gemini API key for embeddings

2. **Environment Setup:**

   ```bash
   # Copy environment template
   cp .env.example .env
   ```

   Update `.env` with your API keys:

   ```bash
   OPENAI_COMPAT_API_KEY=sk-your-openai-compatible-key-here
   EMBEDDER_API_KEY=AIzaSy-your-google-gemini-key-here
   ```

3. **Deploy Stack:**

   ```bash
   # Start all services
   docker-compose up --build -d

   # Verify all services are healthy
   curl http://localhost:8000/health
   ```

4. **Access Points:**
   - API: http://localhost:8000
   - API Documentation: http://localhost:8000/docs
   - Health Check: http://localhost:8000/health
   - Global Statistics: http://localhost:8000/stats
   - User Statistics: http://localhost:8000/stats/{user_id}
## Architecture

### Core Components
- FastAPI Backend: Production-ready API with comprehensive monitoring
- Mem0 OSS: Hybrid memory management (vector + graph + key-value)
- PostgreSQL + pgvector: Vector embeddings storage and similarity search
- Neo4j 5.18: Graph relationships with native vector functions
- Google Gemini: Enterprise-grade embedding generation
### Monitoring & Observability
- Request Tracing: Correlation IDs for end-to-end tracking
- Performance Timing: Operation-level latency monitoring
- Usage Analytics: API call patterns and memory statistics
- Error Tracking: Structured error logging with context
- Health Monitoring: Real-time service status checks
## API Endpoints

### Chat with Memory
- `POST /chat` - Memory-enhanced conversations with context awareness

### Memory Management
- `POST /memories` - Add new memories from conversations
- `POST /memories/search` - Semantic search through stored memories
- `GET /memories/{user_id}` - Retrieve user-specific memories
- `PUT /memories` - Update existing memories
- `DELETE /memories/{memory_id}` - Remove specific memories
- `DELETE /memories/user/{user_id}` - Delete all user memories

### Graph Operations
- `GET /graph/relationships/{user_id}` - Graph relationships for a user

### Monitoring & Analytics
- `GET /stats` - Global application statistics
- `GET /stats/{user_id}` - User-specific analytics and metrics
- `GET /health` - Service health and status check
- `GET /models` - Current model configuration
## Testing Examples

### 1. Health Check

```bash
curl http://localhost:8000/health
# Expected: All services show "healthy"
```

### 2. Add Memory

```bash
curl -X POST http://localhost:8000/memories \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"My name is Alice"}],"user_id":"alice"}'
# Expected: Memory extracted and stored with graph relationships
```

### 3. Search Memories

```bash
curl -X POST http://localhost:8000/memories/search \
  -H "Content-Type: application/json" \
  -d '{"query":"Alice","user_id":"alice"}'
# Expected: Returns stored memory with similarity score
```

### 4. Memory-Enhanced Chat

```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"What do you remember about me?","user_id":"alice"}'
# Expected: AI recalls stored information about Alice
```

### 5. Global Statistics

```bash
curl http://localhost:8000/stats
```

Expected: application usage statistics, e.g.:

```json
{
  "total_memories": 0,
  "total_users": 1,
  "api_calls_today": 5,
  "avg_response_time_ms": 7106.26,
  "memory_operations": {
    "add": 1,
    "search": 2,
    "update": 0,
    "delete": 0
  },
  "uptime_seconds": 137.1
}
```

### 6. User Analytics

```bash
curl http://localhost:8000/stats/alice
```

Expected: user-specific metrics, e.g.:

```json
{
  "user_id": "alice",
  "memory_count": 2,
  "relationship_count": 2,
  "last_activity": "2025-08-10T11:01:45.887157+00:00",
  "api_calls_today": 1,
  "avg_response_time_ms": 23091.93
}
```

### 7. Graph Relationships

```bash
curl http://localhost:8000/graph/relationships/alice
```

Expected: entity relationships extracted from memories, e.g.:

```json
{
  "relationships": [
    {
      "source": "Alice",
      "relationship": "WORKS_AT",
      "target": "Google"
    }
  ],
  "entities": ["Alice", "Google"],
  "user_id": "alice"
}
```
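The statistics payloads above lend themselves to automated smoke tests. A sketch of shape checks for a `/stats` response (field names are taken from the example output above; `validate_stats` is a hypothetical helper, not part of the project):

```python
def validate_stats(stats: dict) -> list[str]:
    """Return a list of problems found in a /stats response (empty list = OK)."""
    problems = []
    # Top-level fields observed in the example /stats response above.
    for field in ("total_memories", "total_users", "api_calls_today",
                  "avg_response_time_ms", "memory_operations", "uptime_seconds"):
        if field not in stats:
            problems.append(f"missing field: {field}")
    # Per-operation counters should never go negative.
    ops = stats.get("memory_operations", {})
    for op in ("add", "search", "update", "delete"):
        if ops.get(op, 0) < 0:
            problems.append(f"negative count for {op}")
    return problems


# Validate the example payload from this README.
example = {
    "total_memories": 0, "total_users": 1, "api_calls_today": 5,
    "avg_response_time_ms": 7106.26,
    "memory_operations": {"add": 1, "search": 2, "update": 0, "delete": 0},
    "uptime_seconds": 137.1,
}
assert validate_stats(example) == []
```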
## Troubleshooting

### Common Issues

1. **Neo4j Vector Function Error**
   - Problem: `Unknown function 'vector.similarity.cosine'`
   - Solution: Ensure Neo4j 5.18+ is used (not 5.15)
   - Fix: Update `docker-compose.yml` to use `neo4j:5.18-community`

2. **Environment Variable Override**
   - Problem: Shell environment variables override the `.env` file
   - Solution: Check `~/.zshrc` or `~/.bashrc` for conflicting exports
   - Fix: Set values directly in `docker-compose.yml`

3. **Model Not Available**
   - Problem: API returns "Invalid model name"
   - Solution: Verify model availability on the custom endpoint
   - Check: `curl -H "Authorization: Bearer $API_KEY" $BASE_URL/v1/models`

4. **Ollama Connection Issues**
   - Problem: Embedding generation fails
   - Solution: Ensure Ollama is running with the nomic-embed-text model
   - Check: `ollama list` should show `nomic-embed-text:latest`

### Service Dependencies
- Neo4j: Must start before backend for vector functions
- PostgreSQL: Required for vector storage initialization
- Ollama: Must be running locally on port 11434
- API Endpoint: Must have valid models available
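Because of these startup-ordering constraints, it can help to wait for the health endpoint before running tests or dependent services. A stdlib retry sketch (the URL, attempt count, and delay are assumptions to adjust for your deployment):

```python
import time
import urllib.error
import urllib.request


def wait_for_healthy(url: str = "http://localhost:8000/health",
                     attempts: int = 30, delay_s: float = 2.0) -> bool:
    """Poll the health endpoint until it answers 200, or give up after `attempts`."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short delay
        if attempt < attempts - 1:
            time.sleep(delay_s)
    return False
```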
## Production Notes
- Memory Usage: Neo4j and PostgreSQL require adequate RAM for vector operations
- API Rate Limits: Monitor usage on custom endpoint
- Data Persistence: All data stored in Docker volumes
- Scaling: Individual services can be scaled independently
- Security: API keys are passed through environment variables
## Development

See the individual README files in the `backend/` and `frontend/` directories for development setup.