April 10, 2024 · 10 min read

Getting Started with Attixa: A Practical Guide

Tutorial · Getting Started · Integration · Memory Systems

Attixa is designed to be simple to integrate while providing powerful memory capabilities for your AI applications. This guide walks you through setting up and using Attixa in your projects.

Installation

Install the package with pip:

pip install attixa

Basic Usage

Here's a simple example of how to use Attixa in your application:

from attixa import Memory

# Initialize the memory system
memory = Memory("./data")

# Store some information
memory.store(
    content="The customer reported an issue with the login system",
    context={
        "priority": "high",
        "category": "authentication",
        "timestamp": "2024-04-10T10:30:00Z"
    }
)

# Query the memory
results = memory.query(
    "Recent authentication problems",
    context={"timeframe": "last 24 hours"}
)

Key Features in Action

1. Contextual Recall

Attixa's contextual recall allows you to find information based on relevance, not just similarity:

# Store a technical document
memory.store(
    content="The API requires authentication using JWT tokens",
    context={
        "document_type": "technical",
        "section": "authentication",
        "version": "2.0"
    }
)

# Query with context
results = memory.query(
    "How do I authenticate API requests?",
    context={"document_type": "technical"}
)

2. Deterministic + Semantic Search

Combine structured filters with semantic understanding:

# Store user feedback
memory.store(
    content="The dashboard is too slow to load",
    context={
        "feedback_type": "performance",
        "component": "dashboard",
        "severity": "high"
    }
)

# Query with both semantic and structured filters
results = memory.query(
    "Performance issues in the dashboard",
    filters={
        "component": "dashboard",
        "severity": "high"
    }
)

3. No Hallucinations

Every result is grounded in your actual data:

# Store product information
memory.store(
    content="Product X supports up to 1000 concurrent users",
    context={
        "source": "documentation",
        "version": "1.2",
        "verified": True
    }
)

# Query with source verification
results = memory.query(
    "What are the limits of Product X?",
    context={"require_verification": True}
)

Advanced Usage

Custom Salience Scoring

You can customize how Attixa determines what's important:

from attixa import Memory, SalienceConfig

# Configure custom salience
config = SalienceConfig(
    time_decay=0.95,  # How quickly importance decays over time
    context_weight=0.7,  # How much context matters
    content_weight=0.3   # How much content matters
)

memory = Memory("./data", config=config)
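To build intuition for these knobs, here is a toy scoring function showing one plausible way such parameters could combine. This is an illustration only, not Attixa's actual salience formula, and `toy_salience` is a made-up name:

```python
def toy_salience(relevance, context_match, age_hours,
                 time_decay=0.95, context_weight=0.7, content_weight=0.3):
    """Toy model: blend content relevance with context match,
    then decay the result exponentially by age in hours."""
    base = content_weight * relevance + context_weight * context_match
    return base * (time_decay ** age_hours)

# An otherwise identical item scores lower the older it gets.
fresh = toy_salience(relevance=0.8, context_match=0.9, age_hours=0)
stale = toy_salience(relevance=0.8, context_match=0.9, age_hours=24)
```

Under this toy model, `time_decay=0.95` means a 24-hour-old item retains roughly 29% of its original score (0.95**24 ≈ 0.29), so recency weighs heavily.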

Real-time Updates

Attixa handles streaming data efficiently:

# Stream data to Attixa
for event in event_stream:
    memory.store(
        content=event.data,
        context={
            "timestamp": event.timestamp,
            "source": event.source
        }
    )
    
    # Query in real-time
    results = memory.query(
        "Current system status",
        context={"timeframe": "realtime"}
    )
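Note that the loop above issues a query after every event. If store volume is high, you may prefer to store every event but query only periodically; a small generic helper can manage the cadence (`every_n` is a hypothetical name, not part of the Attixa API):

```python
def every_n(events, n):
    """Yield (event, should_query) pairs, flagging every n-th event.

    Lets you store each event while running the more expensive
    real-time query only once per n events.
    """
    for i, event in enumerate(events, start=1):
        yield event, (i % n == 0)

# Example cadence: store everything, query once per 10 events.
# for event, should_query in every_n(event_stream, 10):
#     memory.store(content=event.data, context={"source": event.source})
#     if should_query:
#         results = memory.query("Current system status")
```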

Best Practices

  1. Context Matters: Always provide relevant context when storing information
  2. Structured Metadata: Use consistent metadata fields for better filtering
  3. Regular Updates: Store new information as it arrives so your memory stays current
  4. Verification: Use the verification features for critical information
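One way to follow practices 1 and 2 is to build every context dict through a single helper so field names stay consistent across store() calls. A minimal sketch, assuming the Memory API shown earlier; `make_context` and its core fields are illustrative, not an Attixa convention:

```python
from datetime import datetime, timezone

def make_context(category, priority, source, **extra):
    """Build a context dict with a fixed core set of fields.

    Using the same field names on every store() call keeps later
    filtering (e.g. filters={"category": "authentication"}) reliable.
    """
    context = {
        "category": category,
        "priority": priority,
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    context.update(extra)  # optional per-record fields, e.g. verified=True
    return context

# memory.store(
#     content="The customer reported an issue with the login system",
#     context=make_context("authentication", "high", "support-ticket"),
# )
```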

Next Steps

Ready to integrate Attixa into your application? Check out our documentation for more detailed information and advanced features. You can also try the interactive demo to see Attixa in action.

Need help? Our team is available to assist with integration and optimization. Contact us for personalized support.

Allan Livingston

Founder of Attixa

Allan is the founder of Attixa and a longtime builder of AI infrastructure and dev tools. He's always dreamed of a better database ever since an intern borrowed his favorite DB systems textbook, read it in the bathroom, and left it on the floor. His obsession with merging database paradigms goes way back to an ill-advised project to unify ODBC and hierarchical text retrieval. That one ended in stack traces and heartbreak. These scars now fuel his mission to build blazing-fast, salience-aware memory for agents.