
# Mem0 Open Source - Graph Memory Overview
## Introduction
Mem0 now supports **Graph Memory** capabilities that enable users to create and utilize complex relationships between pieces of information, allowing for more nuanced and context-aware responses. This integration combines the strengths of both vector-based and graph-based approaches, resulting in more accurate and comprehensive information retrieval and generation.

**NodeSDK now supports Graph Memory** 🎉
## Installation
To use Mem0 with Graph Memory support, install it using pip:
### Python
```bash
pip install "mem0ai[graph]"
```
### TypeScript
The NodeSDK includes graph memory support in the standard installation.
The pip command above installs Mem0 along with the dependencies required for graph functionality.

**Try Graph Memory on Google Colab**: [Open in Colab](https://colab.research.google.com/drive/1PfIGVHnliIlG2v8cx0g45TF0US-jRPZ1?usp=sharing)

**Demo Video**: [Dynamic Graph Memory by Mem0 - YouTube](https://www.youtube.com/watch?v=u_ZAqNNVtXA)
## Initialize Graph Memory
To initialize Graph Memory, you'll need to set up your configuration with graph store providers. Currently, we support **Neo4j**, **Memgraph**, and **Neptune Analytics** as graph store providers.
### Initialize Neo4j
You can set up [Neo4j](https://neo4j.com/) locally or use the hosted [Neo4j AuraDB](https://neo4j.com/product/auradb/).

**Important**: If you are using Neo4j locally, you need to install the [APOC plugins](https://neo4j.com/labs/apoc/4.1/installation/).
#### LLM Configuration Options
Users can customize the LLM for Graph Memory from the [Supported LLM list](https://docs.mem0.ai/components/llms/overview) with three levels of configuration:
1. **Main Configuration**: If `llm` is set in the main config, it will be used for all graph operations.
2. **Graph Store Configuration**: If `llm` is set in the `graph_store` config, it will override the main config `llm` and be used specifically for graph operations.
3. **Default Configuration**: If no custom LLM is set, the default LLM (`gpt-4o-2024-08-06`) will be used for all graph operations.
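For example, a config that sets one LLM globally but pins graph operations to a specific model (level 2 above) might look like the sketch below; the model names and credentials are placeholders, not recommendations:

```python
# Sketch of a graph-specific LLM override: the "llm" nested inside
# "graph_store" takes precedence over the top-level "llm" for graph
# operations only. Model names and credentials are placeholders.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},  # default for all operations
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {"url": "neo4j+s://xxx", "username": "neo4j", "password": "xxx"},
        "llm": {
            "provider": "openai",
            "config": {"model": "gpt-4o-2024-08-06"},  # graph operations only
        },
    },
}
```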
#### Python Configuration
```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://xxx",
            "username": "neo4j",
            "password": "xxx"
        }
    }
}

m = Memory.from_config(config_dict=config)
```
#### TypeScript Configuration
If you are using NodeSDK, you need to pass `enableGraph` as `true` in the `config` object.

```typescript
import { Memory } from "mem0ai/oss";

const config = {
  enableGraph: true,
  graphStore: {
    provider: "neo4j",
    config: {
      url: "neo4j+s://xxx",
      username: "neo4j",
      password: "xxx",
    },
  },
};

const memory = new Memory(config);
```
### Initialize Memgraph
Run Memgraph with Docker:
```bash
docker run -p 7687:7687 memgraph/memgraph-mage:latest --schema-info-enabled=True
```
The `--schema-info-enabled` flag is set to `True` for more performant schema generation.

**Additional Information**: [Memgraph documentation](https://memgraph.com/docs)
#### Python Configuration
```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "memgraph",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "memgraph",
            "password": "xxx",
        },
    },
}

m = Memory.from_config(config_dict=config)
```
### Initialize Neptune Analytics
Mem0 now supports Amazon Neptune Analytics as a graph store provider. This integration allows you to use Neptune Analytics for storing and querying graph-based memories.
#### Instance Setup
1. Create an Amazon Neptune Analytics instance in your AWS account following the [AWS documentation](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/get-started.html).
2. **Important Considerations**:
   - Public connectivity is not enabled by default; if you are accessing the instance from outside a VPC, it needs to be enabled.
   - Once the Amazon Neptune Analytics instance is available, you will need the graph identifier to connect.
   - The Neptune Analytics instance must be created with the same vector dimensions as the embedding model produces. See: [Vector Index Documentation](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/vector-index.html)
#### Attach Credentials
Configure your AWS credentials with access to your Amazon Neptune Analytics resources by following the [Configuration and credentials precedence](https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-configure.html#configure-precedence).

**Environment Variables Example**:
```bash
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_SESSION_TOKEN=your-session-token
export AWS_DEFAULT_REGION=your-region
```
**Required IAM Permissions**: The IAM user or role making the request must have a policy attached that allows one of the following IAM actions on the neptune-graph resource:

- `neptune-graph:ReadDataViaQuery`
- `neptune-graph:WriteDataViaQuery`
- `neptune-graph:DeleteDataViaQuery`
#### Usage
```python
from mem0 import Memory

# This example must connect to a neptune-graph instance created with
# 1536 vector dimensions, matching the embedder configuration below.
config = {
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-large", "embedding_dims": 1536},
    },
    "graph_store": {
        "provider": "neptune",
        "config": {
            "endpoint": "neptune-graph://<GRAPH_ID>",
        },
    },
}

m = Memory.from_config(config_dict=config)
```
#### Troubleshooting
- **Connection Issues**: Refer to the [Connecting to a graph guide](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/gettingStarted-connecting.html)
- **Authentication Issues**: Refer to the [boto3 client configuration options](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html)
- **Detailed Examples**: See the [Neptune Analytics example notebook](https://docs.mem0.ai/open-source/graph_memory/examples/graph-db-demo/neptune-analytics-example.ipynb)
## Graph Operations
Mem0's graph supports the following core operations:
### Add Memories
Mem0 with Graph Memory supports both `user_id` and `agent_id` parameters. You can use either or both to organize your memories.
#### Python
```python
# Using only user_id
m.add("I like pizza", user_id="alice")

# Using both user_id and agent_id
m.add("I like pizza", user_id="alice", agent_id="food-assistant")
```
#### TypeScript
```typescript
// Using only userId
await memory.add("I like pizza", { userId: "alice" });

// Using both userId and agentId
await memory.add("I like pizza", { userId: "alice", agentId: "food-assistant" });
```
### Get All Memories
#### Python
```python
# Get all memories for a user
m.get_all(user_id="alice")

# Get all memories for a specific agent belonging to a user
m.get_all(user_id="alice", agent_id="food-assistant")
```
#### TypeScript
```typescript
// Get all memories for a user
await memory.getAll({ userId: "alice" });

// Get all memories for a specific agent belonging to a user
await memory.getAll({ userId: "alice", agentId: "food-assistant" });
```
### Search Memories
#### Python
```python
# Search memories for a user
m.search("tell me my name.", user_id="alice")

# Search memories for a specific agent belonging to a user
m.search("tell me my name.", user_id="alice", agent_id="food-assistant")
```
#### TypeScript
```typescript
// Search memories for a user
await memory.search("tell me my name.", { userId: "alice" });

// Search memories for a specific agent belonging to a user
await memory.search("tell me my name.", { userId: "alice", agentId: "food-assistant" });
```
### Delete All Memories
#### Python
```python
# Delete all memories for a user
m.delete_all(user_id="alice")

# Delete all memories for a specific agent belonging to a user
m.delete_all(user_id="alice", agent_id="food-assistant")
```
#### TypeScript
```typescript
// Delete all memories for a user
await memory.deleteAll({ userId: "alice" });

// Delete all memories for a specific agent belonging to a user
await memory.deleteAll({ userId: "alice", agentId: "food-assistant" });
```
## Example Usage
Here's a comprehensive example of how to use Mem0's graph operations:
1. **First**, we'll add some memories for a user named Alice.
2. **Then**, we'll visualize how the graph evolves as we add more memories.
3. **You'll see** how entities and relationships are automatically extracted and connected in the graph.
### Step-by-Step Memory Addition
#### 1. Add memory 'I like going to hikes'
```python
m.add("I like going to hikes", user_id="alice123")
```

**Result**: Creates initial user node and hiking preference relationship.
#### 2. Add memory 'I love to play badminton'
```python
m.add("I love to play badminton", user_id="alice123")
```

**Result**: Adds badminton activity and positive relationship.
#### 3. Add memory 'I hate playing badminton'
```python
m.add("I hate playing badminton", user_id="alice123")
```

**Result**: Updates existing badminton relationship, showing preference conflict resolution.
#### 4. Add memory 'My friend name is john and john has a dog named tommy'
```python
m.add("My friend name is john and john has a dog named tommy", user_id="alice123")
```

**Result**: Creates complex relationship network: Alice -> friends with -> John -> owns -> Tommy (dog).
#### 5. Add memory 'My name is Alice'
```python
m.add("My name is Alice", user_id="alice123")
```

**Result**: Adds identity information to the user node.
#### 6. Add memory 'John loves to hike and Harry loves to hike as well'
```python
m.add("John loves to hike and Harry loves to hike as well", user_id="alice123")
```

**Result**: Creates connections between John, Harry, and hiking activities, potentially connecting with Alice's hiking preference.
#### 7. Add memory 'My friend peter is the spiderman'
```python
m.add("My friend peter is the spiderman", user_id="alice123")
```

**Result**: Adds another friend relationship with identity/role information.
### Search Examples
#### Search for Identity
```python
result = m.search("What is my name?", user_id="alice123")
```

**Expected Response**: Returns Alice's name and related identity information from the graph.
#### Search for Relationships
```python
result = m.search("Who is spiderman?", user_id="alice123")
```

**Expected Response**: Returns Peter and his spiderman identity, along with his friendship relationship to Alice.
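The exact shape of the returned object is not specified on this page and varies across versions; with graph memory enabled it typically carries vector results alongside graph relations. A defensive consumer might look like the sketch below, where all key names (`results`, `relations`, `memory`, `relationship`) are assumptions to verify against your installed version:

```python
def summarize_search(result):
    """Collect memory texts and relation triples from a search payload.

    Handles both a dict shape ({"results": [...], "relations": [...]})
    and a bare list of results; the key names are assumptions.
    """
    if isinstance(result, dict):
        memories = [r.get("memory") for r in result.get("results", [])]
        relations = result.get("relations", [])
    else:  # some versions return a bare list of results
        memories = [r.get("memory") for r in result]
        relations = []
    return memories, relations

# Illustrative payload, not captured from a real run
payload = {
    "results": [{"memory": "Name is Alice"}],
    "relations": [{"source": "alice", "relationship": "friend_of", "target": "peter"}],
}
memories, relations = summarize_search(payload)
```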
## Using Multiple Agents with Graph Memory
When working with multiple agents, you can use the `agent_id` parameter to organize memories by both user and agent. This allows you to:
1. **Create agent-specific knowledge graphs**
2. **Share common knowledge between agents**
3. **Isolate sensitive or specialized information to specific agents**
### Example: Multi-Agent Setup
```python
# Add memories for different agents
m.add("I prefer Italian cuisine", user_id="bob", agent_id="food-assistant")
m.add("I'm allergic to peanuts", user_id="bob", agent_id="health-assistant")
m.add("I live in Seattle", user_id="bob")  # Shared across all agents

# Search within specific agent context
food_preferences = m.search("What food do I like?", user_id="bob", agent_id="food-assistant")
health_info = m.search("What are my allergies?", user_id="bob", agent_id="health-assistant")
location = m.search("Where do I live?", user_id="bob")  # Searches across all agents
```
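One pattern that follows from this is a small routing helper that keeps the topic-to-agent mapping in one place. `AGENT_FOR_TOPIC` and `search_for_topic` below are illustrative names, not part of the Mem0 API:

```python
# Illustrative routing helper, not part of the Mem0 API: scope a query to
# one agent's graph when the topic is known, else search across all agents.
AGENT_FOR_TOPIC = {
    "food": "food-assistant",
    "health": "health-assistant",
}

def search_for_topic(memory, query, user_id, topic=None):
    agent_id = AGENT_FOR_TOPIC.get(topic)
    kwargs = {"user_id": user_id}
    if agent_id is not None:
        kwargs["agent_id"] = agent_id  # scope to one agent's memories
    return memory.search(query, **kwargs)
```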
### Agent Isolation Benefits
1. **Privacy**: Sensitive health information stays with health-related agents
2. **Specialization**: Each agent builds domain-specific knowledge
3. **Shared Context**: Common information (like location) remains accessible to all agents
4. **Scalability**: Easy to add new agents without disrupting existing knowledge
## Key Features
### Automatic Entity Extraction
- **Smart Recognition**: Automatically identifies people, places, objects, and concepts
- **Relationship Mapping**: Creates meaningful connections between entities
- **Context Preservation**: Maintains semantic relationships in the graph
### Dynamic Graph Evolution
- **Real-time Updates**: Graph structure evolves as new memories are added
- **Conflict Resolution**: Handles contradictory information intelligently
- **Relationship Strengthening**: Reinforces connections through repeated mentions
### Intelligent Querying
- **Contextual Search**: Searches consider relationship context, not just semantic similarity
- **Graph Traversal**: Finds information through relationship paths
- **Multi-hop Reasoning**: Can answer questions requiring connection of multiple entities
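As a toy illustration of what multi-hop reasoning means (using a plain triple list, not Mem0's internal graph representation), a question like "which of Alice's friends owns a dog?" chains three relationship hops:

```python
# Toy triple store mirroring the example graph built earlier on this page;
# this is NOT Mem0's internal representation, just an illustration.
triples = [
    ("alice", "friend_of", "john"),
    ("alice", "friend_of", "peter"),
    ("john", "owns", "tommy"),
    ("tommy", "is_a", "dog"),
]

def neighbors(node, rel):
    """All targets reachable from `node` via relation `rel`."""
    return [t for (s, r, t) in triples if s == node and r == rel]

def friends_who_own_a_dog(person):
    owners = []
    for friend in neighbors(person, "friend_of"):   # hop 1: friends
        for pet in neighbors(friend, "owns"):        # hop 2: their pets
            if "dog" in neighbors(pet, "is_a"):      # hop 3: pet species
                owners.append(friend)
    return owners
```

A pure vector search over the individual memory texts would struggle here, because no single memory states "Alice's friend owns a dog"; the answer only emerges by following edges.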
### Hybrid Architecture
- **Vector + Graph**: Combines semantic search with relationship reasoning
- **Dual Storage**: Information stored in both vector and graph formats
- **Unified Interface**: Single API for both vector and graph operations
## Important Notes
> **Note**: The Graph Memory implementation is not standalone. You will be adding/retrieving memories to the vector store and the graph store simultaneously.

This hybrid approach ensures:
- **Semantic Search**: Vector storage enables similarity-based retrieval
- **Relationship Reasoning**: Graph storage enables connection-based queries
- **Comprehensive Results**: Queries leverage both approaches for better accuracy
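The benefit of combining the two stores can be pictured as blending two scores per memory. This is a conceptual sketch with made-up numbers and weights, not Mem0's actual ranking logic:

```python
# Conceptual sketch only; Mem0's real ranking is internal to the library.
# All scores and the alpha weight below are made-up illustrative numbers.
vector_scores = {
    "likes hiking": 0.82,                  # high text similarity to the query
    "friend John owns dog Tommy": 0.35,    # low text similarity
}
graph_boost = {
    "friend John owns dog Tommy": 0.9,     # strong entity/relation overlap
}

def hybrid_rank(alpha=0.5):
    """Blend vector similarity with a graph-proximity boost and rank."""
    combined = {
        memory: alpha * sim + (1 - alpha) * graph_boost.get(memory, 0.0)
        for memory, sim in vector_scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)
```

With the graph boost applied, the relationship-bearing memory outranks the one that merely matches the query text, which is the intuition behind "comprehensive results" above.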
## Getting Help
If you want to use a managed version of Mem0, please check out [Mem0 Platform](https://mem0.dev/pd). If you have any questions, please feel free to reach out to us using one of the following methods:
- **[Discord](https://mem0.dev/DiD)**: Join our community
- **[GitHub](https://github.com/mem0ai/mem0/discussions/new?category=q-a)**: Ask questions on GitHub
- **[Support](https://cal.com/taranjeetio/meet)**: Talk to founders
## Conclusion
Graph Memory in Mem0 represents a significant advancement in AI memory capabilities, enabling more sophisticated reasoning and context-aware responses. By combining the semantic understanding of vector databases with the relationship intelligence of graph databases, Mem0 provides a comprehensive solution for building truly intelligent AI systems.

The ability to automatically extract entities, create relationships, and reason across connections makes Graph Memory particularly powerful for:
- **Personal AI Assistants** that understand complex personal relationships
- **Customer Support Systems** that maintain comprehensive user context
- **Knowledge Management Platforms** that connect related information
- **Multi-Agent Systems** that share and specialize knowledge appropriately

With support for multiple graph database backends and both Python and TypeScript SDKs, Graph Memory provides the flexibility and scalability needed for production AI applications.