April 4, 2024 · 10 min read

Migrating to Attixa: A Guide for Vector Store Users

Migration · Vector Stores · Integration · Tutorial

Vector stores are a fundamental part of many modern AI applications, providing efficient similarity search capabilities. This guide shows how to enhance your vector store applications with Attixa's memory capabilities while maintaining compatibility with your existing code.

Why Add Attixa to Your Stack?

Attixa complements existing vector stores with additional capabilities:

  1. Context-Aware Memory: Enhance similarity search with contextual understanding
  2. Rich Metadata: Maintain detailed information about stored items
  3. Adaptive Retrieval: Learn from usage patterns over time
  4. Easy Integration: Compatible with existing vector store code

Integration Guide

Here's how to integrate Attixa with your existing stack:

from attixa import MemorySystem
from attixa.compat import VectorStoreAdapter

# Initialize Attixa with vector store compatibility
memory = MemorySystem()
store_adapter = VectorStoreAdapter(memory)

# Create an index
index = store_adapter.Index("your-index")

# Your existing code continues to work
index.upsert(vectors=[...])
results = index.query(vector=[...])

Compatibility Layer Features

  1. Vector Operations
# Upsert vectors (same as Pinecone)
await index.upsert(
    vectors=[
        {"id": "1", "values": [0.1, 0.2, 0.3], "metadata": {"text": "example"}},
        {"id": "2", "values": [0.4, 0.5, 0.6], "metadata": {"text": "another"}}
    ]
)

# Query vectors (same as Pinecone)
results = await index.query(
    vector=[0.1, 0.2, 0.3],
    top_k=5,
    include_metadata=True
)
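
The snippets above don't show the response shape. Assuming the compatibility layer mirrors Pinecone's response format, with a matches list of scored entries, reading the results might look like this:

# Iterate over matches (assumes a Pinecone-style response shape)
for match in results["matches"]:
    print(match["id"], match["score"], match["metadata"]["text"])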
  2. Metadata Handling
# Store with metadata (enhanced)
await index.upsert(
    vectors=[{
        "id": "doc1",
        "values": [...],
        "metadata": {
            "text": "content",
            "context": {
                "source": "web",
                "timestamp": "2024-04-18",
                "importance": 0.8
            }
        }
    }]
)
  3. Advanced Queries
# Query with context (Attixa enhancement)
results = await index.query(
    vector=[...],
    context={
        "timeframe": "last week",
        "importance_threshold": 0.7
    }
)

Migration Strategies

  1. Phased Integration
class HybridIndex:
    def __init__(self, vector_store, attixa_index):
        self.vector_store = vector_store
        self.attixa = attixa_index
    
    async def upsert(self, vectors):
        # Write to both during migration
        await self.vector_store.upsert(vectors)
        await self.attixa.upsert(vectors)
    
    async def query(self, vector):
        # Combine results during testing
        store_results = await self.vector_store.query(vector)
        attixa_results = await self.attixa.query(vector)
        return self.merge_results(store_results, attixa_results)
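
The query method above calls self.merge_results, which isn't defined in the snippet. A minimal sketch of that method, assuming both backends return lists of dicts with id and score fields, dedupes by id and keeps the higher score:

    def merge_results(self, store_results, attixa_results):
        # Hypothetical merge: dedupe by id, keeping whichever backend
        # reported the higher score for that id
        merged = {}
        for result in store_results + attixa_results:
            existing = merged.get(result["id"])
            if existing is None or result["score"] > existing["score"]:
                merged[result["id"]] = result
        # Return the highest-scoring matches first
        return sorted(merged.values(), key=lambda r: r["score"], reverse=True)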
  2. Data Migration
async def migrate_data(vector_store, attixa_index):
    # Fetch all vectors from Pinecone
    vectors = await vector_store.fetch_all()
    
    # Transform and store in Attixa
    for batch in chunk_vectors(vectors, 100):
        await attixa_index.upsert(batch)
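
The chunk_vectors helper isn't defined above. A minimal version, assuming vectors is a list, simply slices it into fixed-size batches:

def chunk_vectors(vectors, batch_size):
    # Yield successive batches of at most batch_size vectors
    for i in range(0, len(vectors), batch_size):
        yield vectors[i:i + batch_size]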
  3. Feature Migration
async def migrate_features():
    # Map Pinecone features to Attixa
    feature_map = {
        "namespace": "context",
        "filter": "metadata",
        "include_values": "include_vectors"
    }
    
    # Update queries to use new features
    updated_queries = migrate_queries(existing_queries, feature_map)
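
Neither existing_queries nor migrate_queries is defined in the snippet. One naive sketch of migrate_queries, assuming each query is a dict keyed by Pinecone parameter names, just renames keys according to the feature map:

def migrate_queries(queries, feature_map):
    # Rename Pinecone-style parameters to their Attixa equivalents,
    # leaving unmapped keys unchanged
    return [
        {feature_map.get(key, key): value for key, value in query.items()}
        for query in queries
    ]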

Best Practices

  1. Testing

    • Compare query results between the two backends (see the sketch after this list)
    • Verify performance metrics
    • Test edge cases
  2. Monitoring

    • Track migration progress
    • Monitor system performance
    • Alert on discrepancies
  3. Optimization

    • Tune Attixa parameters
    • Optimize batch sizes
    • Configure appropriate thresholds
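
To make the testing point concrete, here is a rough sketch of a side-by-side comparison using the HybridIndex from the migration section. The result shape and the overlap metric are illustrative assumptions, not part of the compatibility layer:

async def compare_results(hybrid_index, sample_vectors, top_k=5):
    # Measure how much the two backends agree on their top results
    # (assumes each result is a dict with an "id" field)
    for vector in sample_vectors:
        store_results = await hybrid_index.vector_store.query(vector)
        attixa_results = await hybrid_index.attixa.query(vector)
        store_ids = {r["id"] for r in store_results[:top_k]}
        attixa_ids = {r["id"] for r in attixa_results[:top_k]}
        overlap = len(store_ids & attixa_ids) / top_k
        print(f"top-{top_k} overlap: {overlap:.0%}")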

Next Steps

Ready to migrate from Pinecone to Attixa? Check out our migration guide or try our compatibility layer.

Allan Livingston

Founder of Attixa

Allan is the founder of Attixa and a longtime builder of AI infrastructure and dev tools. He's always dreamed of a better database ever since an intern borrowed his favorite DB systems textbook, read it in the bathroom, and left it on the floor. His obsession with merging database paradigms goes way back to an ill-advised project to unify ODBC and hierarchical text retrieval. That one ended in stack traces and heartbreak. These scars now fuel his mission to build blazing-fast, salience-aware memory for agents.