# Mem0 Interface POC - Testing Guide
This guide provides comprehensive testing instructions and cURL examples for all API endpoints.
## 🚀 Quick Setup and Testing
### 1. Start the Services
```bash
# Copy and configure environment
cp .env.example .env
# Edit .env with your API keys and settings
# Start all services
docker-compose up -d
# Check services are running
docker-compose ps
```
### 2. Wait for Services to Initialize
```bash
# Check backend health (wait until all services are healthy)
curl -s http://localhost:8000/health | jq
# Check databases are ready
docker-compose logs postgres | grep "ready to accept connections"
docker-compose logs neo4j | grep "Started"
```
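If you would rather not poll by hand, a small loop can wait until the backend reports healthy. This is a minimal sketch that assumes the `/health` endpoint returns a top-level `status` field, as shown in the next section:
```bash
# Poll /health until the backend reports "healthy" (give up after ~2 minutes)
for i in {1..24}; do
  STATUS=$(curl -s http://localhost:8000/health | jq -r '.status' 2>/dev/null)
  if [ "$STATUS" = "healthy" ]; then
    echo "Backend is healthy"
    break
  fi
  echo "Waiting for backend... (attempt $i, status: ${STATUS:-unreachable})"
  sleep 5
done
```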
### 3. Basic Health Check
```bash
curl -X GET "http://localhost:8000/health" \
-H "Content-Type: application/json" | jq
```
Expected response:
```json
{
  "status": "healthy",
  "services": {
    "openai_endpoint": "healthy",
    "memory_o4-mini": "healthy",
    "memory_claude-sonnet-4": "healthy",
    "memory_gemini-2.5-pro": "healthy",
    "memory_o3": "healthy"
  },
  "timestamp": "2024-12-28T10:30:00.000Z"
}
```
## 📋 Complete API Testing
### 1. Model Information
```bash
# Get available models and routing configuration
curl -X GET "http://localhost:8000/models" | jq
```
Expected response:
```json
{
  "available_models": {
    "fast": "o4-mini",
    "analytical": "gemini-2.5-pro",
    "reasoning": "claude-sonnet-4",
    "expert": "o3",
    "extraction": "o4-mini"
  },
  "model_routing": {
    "simple": "o4-mini",
    "moderate": "gemini-2.5-pro",
    "complex": "claude-sonnet-4",
    "expert": "o3"
  }
}
```
### 2. Enhanced Chat with Memory
#### Simple Chat (should route to o4-mini)
```bash
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Hello, my name is Alice and I live in San Francisco",
"user_id": "alice_test",
"enable_graph": true
}' | jq
```
#### Complex Chat (should route to claude-sonnet-4)
```bash
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Can you help me design a comprehensive architecture for a distributed microservices system that needs to handle high throughput?",
"user_id": "alice_test",
"enable_graph": true
}' | jq
```
#### Expert Chat (should route to o3)
```bash
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "I need to research and optimize a complex machine learning pipeline for real-time fraud detection with comprehensive evaluation metrics",
"user_id": "alice_test",
"enable_graph": true
}' | jq
```
#### Chat with Context
```bash
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "What did I tell you about my location?",
"user_id": "alice_test",
"context": [
{"role": "user", "content": "I need help with travel planning"},
{"role": "assistant", "content": "I'd be happy to help with travel planning!"}
],
"enable_graph": true
}' | jq
```
#### Force Specific Model
```bash
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Simple question: what is 2+2?",
"user_id": "alice_test",
"force_model": "o3",
"enable_graph": true
}' | jq
```
Expected chat response structure:
```json
{
  "response": "Hello Alice! Nice to meet you. I've noted that you're located in San Francisco...",
  "model_used": "o4-mini",
  "complexity": "simple",
  "memories_used": 0,
  "estimated_tokens": 45,
  "task_metrics": {
    "complexity": "simple",
    "estimated_tokens": 45,
    "requires_memory": false,
    "is_time_sensitive": false,
    "context_length": 0
  }
}
```
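To turn the manual inspection above into a quick assertion, you can pipe the response through `jq` and compare the routed model. A minimal sketch, assuming the response shape shown above:
```bash
# Send a simple prompt and warn if it was not routed to o4-mini
RESPONSE=$(curl -s -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, my name is Alice", "user_id": "alice_test"}')

MODEL=$(echo "$RESPONSE" | jq -r '.model_used')
COMPLEXITY=$(echo "$RESPONSE" | jq -r '.complexity')
echo "model_used=$MODEL complexity=$COMPLEXITY"
[ "$MODEL" = "o4-mini" ] || echo "WARNING: expected o4-mini, got $MODEL"
```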
### 3. Memory Management
#### Add Memories Manually
```bash
curl -X POST "http://localhost:8000/memories" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "I work as a software engineer at Google"},
{"role": "assistant", "content": "That's great! What kind of projects do you work on?"},
{"role": "user", "content": "I focus on machine learning infrastructure"}
],
"user_id": "alice_test",
"metadata": {"topic": "career", "importance": "high"},
"enable_graph": true
}' | jq
```
#### Search Memories
```bash
curl -X POST "http://localhost:8000/memories/search" \
-H "Content-Type: application/json" \
-d '{
"query": "Where does Alice work?",
"user_id": "alice_test",
"limit": 5
}' | jq
```
#### Get All User Memories
```bash
curl -X GET "http://localhost:8000/memories/alice_test?limit=10" | jq
```
#### Update Memory (you'll need a real memory_id from previous responses)
```bash
# First get memories to find an ID
MEMORY_ID=$(curl -s -X GET "http://localhost:8000/memories/alice_test?limit=1" | jq -r '.[0].id')
curl -X PUT "http://localhost:8000/memories" \
-H "Content-Type: application/json" \
-d '{
"memory_id": "'$MEMORY_ID'",
"content": "Alice works as a senior software engineer at Google, specializing in ML infrastructure",
"metadata": {"topic": "career", "importance": "high", "updated": true}
}' | jq
```
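The `jq` path above assumes the GET endpoint returns a JSON array of memory objects with an `id` field; if the user has no memories yet, `MEMORY_ID` will be empty or `null`. A small guard avoids sending a bad update:
```bash
# Only attempt the update if a real memory ID was found
if [ -z "$MEMORY_ID" ] || [ "$MEMORY_ID" = "null" ]; then
  echo "No memory found for alice_test - add memories first (see above)"
else
  echo "Updating memory $MEMORY_ID"
fi
```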
#### Delete Specific Memory
```bash
curl -X DELETE "http://localhost:8000/memories/$MEMORY_ID" | jq
```
#### Delete All User Memories
```bash
curl -X DELETE "http://localhost:8000/memories/user/alice_test" | jq
```
### 4. Graph Relationships
```bash
# Get graph relationships for a user
curl -X GET "http://localhost:8000/graph/relationships/alice_test" | jq
```
Expected graph response:
```json
{
  "relationships": [
    {
      "source": "Alice",
      "relationship": "WORKS_AT",
      "target": "Google",
      "properties": {}
    },
    {
      "source": "Alice",
      "relationship": "LIVES_IN",
      "target": "San Francisco",
      "properties": {}
    }
  ],
  "entities": ["Alice", "Google", "San Francisco"],
  "user_id": "alice_test"
}
```
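For a quick human-readable view of the graph, the relationships can be flattened into triples with `jq`. This is a small sketch that assumes the response structure shown above:
```bash
# Print relationships as "source -RELATION-> target" triples
curl -s -X GET "http://localhost:8000/graph/relationships/alice_test" \
  | jq -r '.relationships[] | "\(.source) -\(.relationship)-> \(.target)"'
```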
## 🧪 Test Scenarios
### Scenario 1: User Onboarding and Profile Building
```bash
# Step 1: Initial introduction
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Hi, I'\''m Bob. I'\''m a data scientist at Microsoft in Seattle. I love hiking and photography.",
"user_id": "bob_test"
}' | jq
# Step 2: Add work preferences
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "I prefer working with Python and PyTorch for my machine learning projects.",
"user_id": "bob_test"
}' | jq
# Step 3: Test memory recall
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "What programming languages do I prefer?",
"user_id": "bob_test"
}' | jq
# Step 4: Check stored memories
curl -X GET "http://localhost:8000/memories/bob_test" | jq
# Step 5: View relationships
curl -X GET "http://localhost:8000/graph/relationships/bob_test" | jq
```
### Scenario 2: Multi-User Isolation Testing
```bash
# Create memories for User 1
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "My favorite food is pizza",
"user_id": "user1"
}' | jq
# Create memories for User 2
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "My favorite food is sushi",
"user_id": "user2"
}' | jq
# Test isolation - User 1 should only see their own memories
curl -X POST "http://localhost:8000/memories/search" \
-H "Content-Type: application/json" \
-d '{
"query": "favorite food",
"user_id": "user1"
}' | jq
# Test isolation - User 2 should only see their own memories
curl -X POST "http://localhost:8000/memories/search" \
-H "Content-Type: application/json" \
-d '{
"query": "favorite food",
"user_id": "user2"
}' | jq
```
### Scenario 3: Memory Evolution and Conflict Resolution
```bash
# Initial preference
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "I really dislike coffee, I prefer tea",
"user_id": "charlie_test"
}' | jq
# Changed preference (should update memory)
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Actually, I'\''ve started to really enjoy coffee now, especially espresso",
"user_id": "charlie_test"
}' | jq
# Test current preference
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "What do I think about coffee?",
"user_id": "charlie_test"
}' | jq
# Check memory evolution
curl -X GET "http://localhost:8000/memories/charlie_test" | jq
```
### Scenario 4: Model Routing Validation
```bash
# Simple task (should use o4-mini)
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "What is the capital of France?",
"user_id": "routing_test"
}' | jq '.model_used'
# Analytical task (should use gemini-2.5-pro)
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Can you analyze the pros and cons of microservices vs monolithic architecture?",
"user_id": "routing_test"
}' | jq '.model_used'
# Complex reasoning (should use claude-sonnet-4)
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Help me design a strategy for implementing a new software development process across multiple teams",
"user_id": "routing_test"
}' | jq '.model_used'
# Expert task (should use o3)
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "I need to research and optimize a comprehensive distributed system architecture with multiple databases, caching layers, and real-time processing requirements",
"user_id": "routing_test"
}' | jq '.model_used'
```
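The same four checks can be driven from a small loop so the routed models print side by side. A sketch assuming the `/chat` response includes `model_used` as shown earlier:
```bash
# Run representative prompts of increasing complexity and print the routed model
PROMPTS=(
  "What is the capital of France?"
  "Analyze the pros and cons of microservices vs monolithic architecture"
  "Design a strategy for rolling out a new development process across teams"
  "Research and optimize a distributed system with caching and real-time processing"
)
for PROMPT in "${PROMPTS[@]}"; do
  MODEL=$(curl -s -X POST "http://localhost:8000/chat" \
    -H "Content-Type: application/json" \
    -d "{\"message\": \"$PROMPT\", \"user_id\": \"routing_test\"}" | jq -r '.model_used')
  echo "$MODEL <- $PROMPT"
done
```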
## 🔍 Monitoring and Debugging
### Check Service Logs
```bash
# Backend logs
docker-compose logs -f backend
# Database logs
docker-compose logs -f postgres
docker-compose logs -f neo4j
# All logs
docker-compose logs -f
```
### Database Direct Access
```bash
# PostgreSQL
docker-compose exec postgres psql -U mem0_user -d mem0_db
# Check tables
\dt
# Check embeddings
SELECT id, user_id, content, created_at FROM embeddings LIMIT 5;
# Neo4j Browser
# Open http://localhost:7474 in browser
# Username: neo4j, Password: mem0_neo4j_password
# Check nodes and relationships
MATCH (n) RETURN n LIMIT 10;
MATCH ()-[r]->() RETURN r LIMIT 10;
```
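For a non-interactive check, the same queries can be run with `-c` from the host. This assumes the `embeddings` table and `user_id` column referenced above:
```bash
# Count stored memories per user without opening an interactive psql session
docker-compose exec postgres psql -U mem0_user -d mem0_db \
  -c "SELECT user_id, COUNT(*) AS memories FROM embeddings GROUP BY user_id ORDER BY memories DESC;"
```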
### Performance Testing
```bash
# Simple load test with curl
for i in {1..10}; do
curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "Test message '$i'",
"user_id": "load_test_user"
}' &
done
wait
# Check response times
time curl -X POST "http://localhost:8000/chat" \
-H "Content-Type: application/json" \
-d '{
"message": "What is machine learning?",
"user_id": "perf_test"
}'
```
## ✅ Expected Results Checklist
After running the tests, verify:
- [ ] Health check shows all services healthy
- [ ] Chat responses are generated using appropriate models
- [ ] Memories are stored and retrievable
- [ ] Memory search returns relevant results
- [ ] Graph relationships are created and accessible
- [ ] User isolation works correctly
- [ ] Memory updates and deletions work
- [ ] Model routing works as expected
- [ ] No errors in service logs
- [ ] Database connections are stable
## 🐛 Troubleshooting
### Common Issues
1. **"No memory instance available"**
- Check if databases are running: `docker-compose ps`
- Verify environment variables in `.env`
- Check backend logs: `docker-compose logs backend`
2. **OpenAI endpoint errors**
- Verify `OPENAI_API_KEY` and `OPENAI_BASE_URL` in `.env`
- Test endpoint directly: `curl -H "Authorization: Bearer $OPENAI_API_KEY" $OPENAI_BASE_URL/models`
3. **Memory search returns empty results**
- Ensure memories were added first
- Check user_id matches between add and search
- Verify pgvector extension: `docker-compose exec postgres psql -U mem0_user -d mem0_db -c "\dx"`
4. **Graph relationships not appearing**
- Check if `enable_graph: true` is set
- Verify Neo4j is running with APOC: `docker-compose logs neo4j | grep -i apoc`
- Check Neo4j connectivity: open http://localhost:7474
### Reset Everything
```bash
# Stop all services
docker-compose down -v
# Remove any remaining unused volumes (note: this prunes ALL unused Docker volumes, not just this project's)
docker volume prune -f
# Restart fresh
docker-compose up -d
```
## 📊 Performance Expectations
With optimal configuration:
- Health check: < 100ms
- Simple chat: < 2s (depends on o4-mini speed)
- Complex chat: < 10s (depends on model)
- Memory search: < 500ms
- Memory add: < 1s
- Graph queries: < 1s
Performance will vary based on:
- Custom endpoint response times
- Database hardware/configuration
- Network latency
- Query complexity