
MCP Integration

The Memory Module uses the Model Context Protocol (MCP) to work seamlessly with 29+ AI assistants. Configure once, use everywhere.
Already set up? Jump to Using MCP Tools to see how to use memory in your AI conversations.

What is MCP?

Model Context Protocol (MCP) is an open standard that lets AI assistants connect to external services like the Memory Module.

Why it matters: Instead of building separate integrations for Claude Desktop, Continue, Cursor, Zed, etc., we implement MCP once and it works everywhere.

For you: One configuration works across all your AI tools. Same memory system, same context, everywhere you work.

Supported AI Assistants

The Memory Module works with any MCP-compatible client. Full list: MCP Clients Directory

Quick Setup

Step 1: Get Your MCP Credentials

  1. Log into app.ulpi.io
  2. Navigate to your repository
  3. Go to Settings → MCP Integration
  4. Click Copy MCP Configuration
You’ll get a JSON config like this:
{
  "url": "https://api.ulpi.io/mcp/memory",
  "headers": {
    "Authorization": "Bearer ulpi_xxx....",
    "X-Tenant-ID": "your-repo-id"
  },
  "transport": "sse"
}
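Before pasting the copied config into an assistant, you may want to confirm what you actually have on the clipboard. Below is a small optional sketch (not part of the product) that parses the config and echoes a summary with the API key masked; it assumes the exact JSON shape shown above.

```python
import json

def summarize_config(raw: str) -> str:
    """Parse a copied MCP config and return a one-line summary with the key masked.
    Assumes the {url, headers, transport} shape shown in the docs above."""
    cfg = json.loads(raw)
    token = cfg["headers"]["Authorization"].removeprefix("Bearer ")
    # Show only the first few characters of the key so the summary is safe to share.
    masked = token[:8] + "..." if len(token) > 8 else "***"
    return (f"endpoint={cfg['url']} tenant={cfg['headers']['X-Tenant-ID']} "
            f"key={masked} transport={cfg['transport']}")
```

If `json.loads` raises, the copy was truncated or mangled; re-copy from the admin panel.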
Step 2: Add to Your AI Assistant

Choose your AI assistant below for platform-specific instructions:
Step 3: Verify Connection

Open your AI assistant and ask:
What MCP tools do you have access to?
You should see:
  • store-memory
  • search-memories
  • retrieve-memory
  • reinforce-memory
  • prune-memories
  • delete-memory
Plus 4 resources: memory://list, memory://stats, memory://recent, memory://
Step 4: Store Your First Memory

Try it out:
Store a memory: "Our API uses JWT bearer token authentication.
Tokens expire after 30 days. Include in header:
Authorization: Bearer {token}"

Tags: api, authentication, documentation
Sector: semantic
Your AI assistant will use the MCP tool automatically!

Platform-Specific Setup

Claude Desktop

Supported: macOS, Windows, Linux
Step 1: Locate Config File

macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Windows:
%APPDATA%\Claude\claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
If the file doesn’t exist, create it containing an empty JSON object: {}
Step 2: Add MCP Server

Edit the config file and add:
{
  "mcpServers": {
    "ulpi-memory": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "X-Tenant-ID": "your-repo-id"
      },
      "transport": "sse"
    }
  }
}
Replace YOUR_API_KEY and your-repo-id with your actual credentials from Step 1.
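If you already have other MCP servers in this file, merging JSON by hand is easy to get wrong. A hypothetical helper (ours, not part of the product) that inserts the ulpi-memory entry without clobbering existing servers, using the field names shown above:

```python
import json
from pathlib import Path

def add_ulpi_server(config_path: Path, api_key: str, repo_id: str) -> dict:
    """Insert the ulpi-memory entry under mcpServers, preserving any
    servers already configured in the file. Creates the file if missing."""
    cfg = json.loads(config_path.read_text()) if config_path.exists() else {}
    cfg.setdefault("mcpServers", {})["ulpi-memory"] = {
        "url": "https://api.ulpi.io/mcp/memory",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "X-Tenant-ID": repo_id,
        },
        "transport": "sse",
    }
    config_path.write_text(json.dumps(cfg, indent=2))
    return cfg
```

Point it at the platform-specific path from Step 1 and restart the assistant afterwards.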
Step 3: Restart Claude Desktop

Completely quit (Cmd+Q / Alt+F4) and reopen Claude Desktop. You should see a “Connected to 1 MCP server” notification.
Step 4: Test It

In a new conversation:
What memory tools do you have?
Claude should list the 6 memory tools.

Continue.dev

Supported: VS Code, JetBrains IDEs (IntelliJ, PyCharm, WebStorm, etc.)
Step 1: Locate Config File

Both VS Code and JetBrains:
~/.continue/config.json
Or use the GUI: Continue sidebar → Gear icon → Edit config.json
Step 2: Add MCP Server

Add under experimental.modelContextProtocolServers:
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "name": "ulpi-memory",
        "url": "https://api.ulpi.io/mcp/memory",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY",
          "X-Tenant-ID": "your-repo-id"
        },
        "transport": "sse"
      }
    ]
  }
}
Step 3: Reload

VS Code: Cmd+Shift+P → “Developer: Reload Window”
JetBrains: Restart the IDE
Step 4: Verify

In Continue chat:
What memory tools are available?

Cursor

Supported: macOS, Windows, Linux
Step 1: Open Cursor Settings

Settings (Cmd+, / Ctrl+,) → Extensions → MCP Servers
Step 2: Add MCP Server

Click Add MCP Server:
  • Name: ulpi-memory
  • URL: https://api.ulpi.io/mcp/memory
  • Transport: SSE (Server-Sent Events)
  • Headers:
{
  "Authorization": "Bearer YOUR_API_KEY",
  "X-Tenant-ID": "your-repo-id"
}
Click Save.
Step 3: Restart Cursor

Close and reopen Cursor IDE
Step 4: Verify

Cursor AI chat (Cmd+L):
What MCP servers are connected?

Cline

Supported: VS Code extension
Step 1: Install Cline

If not already: VS Code → Extensions → Search “Cline” → Install
Step 2: Open Cline Settings

Cline sidebar → Gear icon ⚙️ → MCP Servers
Step 3: Add Configuration

Create/edit ~/.cline/mcp_servers.json:
{
  "servers": {
    "ulpi-memory": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "X-Tenant-ID": "your-repo-id"
      },
      "transport": "sse"
    }
  }
}
Step 4: Reload

Cmd+Shift+P → “Developer: Reload Window”
Step 5: Verify

Cline chat:
Show available MCP tools

Zed

Supported: macOS, Linux
Step 1: Locate Config File

macOS/Linux:
~/.config/zed/settings.json
Or use GUI: Cmd+, → Edit settings.json
Step 2: Add MCP Server

{
  "mcp_servers": {
    "ulpi-memory": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "X-Tenant-ID": "your-repo-id"
      },
      "transport": "sse"
    }
  }
}
Step 3: Restart Zed

Quit (Cmd+Q) and reopen
Step 4: Verify

Zed AI (Cmd+Shift+A):
What MCP tools are available?

Other MCP Clients

For any MCP-compatible client, use this configuration template:
{
  "mcpServers": {
    "ulpi-memory": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY",
        "X-Tenant-ID": "your-repo-id"
      },
      "transport": "sse"
    }
  }
}
Configuration Fields:
  • url (required): MCP server endpoint
  • headers (required): Authentication (API key + tenant ID)
  • transport (required): sse for Server-Sent Events
Consult your MCP client’s documentation for specific file locations and syntax.
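The "transport": "sse" setting means the server streams responses as Server-Sent Events. The framing below is the standard SSE format (field: value lines, events separated by blank lines), not anything Ulpi-specific; a minimal parser sketch, useful when debugging a raw stream:

```python
def parse_sse(stream: str) -> list[dict]:
    """Split a raw Server-Sent Events stream into events.
    Each event is a dict of field -> value; per the SSE format, events end at
    a blank line and repeated data fields are joined with newlines."""
    events = []
    current = {}
    for line in stream.splitlines():
        if not line:                 # blank line terminates the current event
            if current:
                events.append(current)
                current = {}
            continue
        if line.startswith(":"):     # comment line, often used as a keep-alive
            continue
        field, _, value = line.partition(":")
        value = value.removeprefix(" ")
        if field == "data":
            current["data"] = current["data"] + "\n" + value if "data" in current else value
        else:
            current[field] = value
    if current:                      # stream ended without a trailing blank line
        events.append(current)
    return events
```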

Using MCP Tools

Once configured, your AI assistant can use 6 memory tools naturally:

1. store-memory

Purpose: Save new memories
Usage:
Store a memory: "React Server Components allow async data
fetching directly in components, eliminating client-side
data fetching waterfalls. This reduces bundle size and
improves initial page load time."

Tags: react, server-components, performance
Sector: semantic
Parameters:
  • content (required): The information to store
  • sector (optional): episodic, semantic, procedural, emotional, reflective
  • tags (optional): Array of tags for organization
  • source (optional): Where this came from
  • metadata (optional): Custom JSON object
Returns: Memory ID, sector, initial salience, embedding status

2. search-memories

Purpose: Find memories by semantic meaning
Usage:
Search my memories for "API authentication methods"
With filters:
Search semantic memories for "database optimization" with waypoint expansion
Parameters:
  • query (required): Search text
  • limit (optional): Number of results (default: 10, max: 100)
  • sector (optional): Filter by cognitive sector
  • min_salience (optional): Minimum salience threshold
  • expand_waypoints (optional): Enable context expansion (default: true)
  • max_hops (optional): Waypoint traversal depth (default: 3)
Returns: Ranked list of memories with relevance scores

3. retrieve-memory

Purpose: Get specific memory by ID
Usage:
Retrieve memory 550e8400-e29b-41d4-a716-446655440000
Parameters:
  • memory_id (required): UUID of the memory
Returns: Full memory details

4. reinforce-memory

Purpose: Explicitly boost memory salience
Usage:
Reinforce the React Server Components memory with Deep Learning profile
Parameters:
  • memory_id (required): UUID or description to identify memory
  • profile (optional): quick_refresh, maintenance, deep_learning, emergency
Returns: Old salience, new salience, profile used

5. prune-memories

Purpose: Remove low-salience memories
Usage:
Prune memories with salience below 0.1
Parameters:
  • threshold (optional): Salience threshold (default: 0.1)
  • sector (optional): Only prune specific sector
  • dry_run (optional): Preview what would be deleted
Returns: Number of memories pruned

6. delete-memory

Purpose: Permanently delete a memory
Usage:
Delete memory about yesterday's standup
Parameters:
  • memory_id (required): UUID or description
Returns: Confirmation of deletion

MCP Resources

In addition to tools, MCP provides 4 resources for browsing:

memory://list

Purpose: Paginated list of all memories
Usage:
Show me memory://list
Returns: List with pagination, sorted by salience

memory://stats

Purpose: System statistics
Usage:
Show me memory://stats
Returns:
  • Total memories
  • By-sector breakdown
  • Average salience
  • Hot memories count
  • Total waypoints
  • Embeddings today

memory://recent

Purpose: Recently accessed memories
Usage:
Show me memory://recent
Returns: Last 10 memories you accessed

memory://

Purpose: Specific memory by ID
Usage:
Show me memory://550e8400-e29b-41d4-a716-446655440000
Returns: Full memory details
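The four resource URIs share one scheme. As a small hypothetical helper (the function name is ours, not part of the product), here is how a client might classify a memory:// URI, treating anything after the scheme that is neither a known resource name nor a well-formed UUID as invalid:

```python
import uuid

def classify_memory_uri(uri: str) -> str:
    """Map a memory:// URI to the resource it addresses:
    'list', 'stats', 'recent', 'memory-id' for a UUID, or 'invalid'."""
    prefix = "memory://"
    if not uri.startswith(prefix):
        return "invalid"
    rest = uri[len(prefix):]
    if rest in ("list", "stats", "recent"):
        return rest
    try:
        uuid.UUID(rest)          # plain UUID format check
        return "memory-id"
    except ValueError:
        return "invalid"
```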

Real-World Usage Examples

Morning Routine: Catch Up on Context

You: Show me my recent memories

AI: Here are your 10 most recently accessed memories:
1. "Sprint planning priorities for this week..."
2. "Customer feedback from Acme Corp call..."
3. "Bug investigation: API timeout issues..."
...

You: Search memories for "sprint priorities"

AI: Found 3 relevant memories... [lists priorities with context]

During Coding: Access Team Patterns

You: How does our team handle API errors?

AI: [Automatically searches memories]
Found memory: "Our API error handling pattern:
1. Throw custom exceptions in services
2. Catch in controller, return JsonResponse
3. Log with context (user ID, request ID)
4. Return standardized format: {message, error_code, details}
..."

You: Store that I just implemented this pattern in PaymentController

AI: [Uses store-memory tool]
Memory stored! Tagged: error-handling, payment-controller, implementation

After Meetings: Capture Decisions

You: Store a memory about today's architecture meeting:
"We decided to use PostgreSQL over MongoDB for the analytics
service because we need strong ACID guarantees for financial
data and complex joins for reporting. MongoDB was considered
but rejected due to lack of joins.

Decision made: 2025-01-25
Participants: Sarah (Lead), Marcus (Backend), Aisha (DevOps)"

Tags: architecture, database, postgresql, adr
Sector: reflective

AI: [Uses store-memory tool]
Memory stored with Reflective sector (693-day half-life) since
this is a strategic architectural decision. Tagged appropriately
and connected to 3 related memories about database choices.

Research Mode: Build Knowledge Graph

You: Search memories for "React state management" with full context

AI: [Searches with waypoint expansion enabled]
Found 8 memories:

Direct matches:
1. "React hooks (useState, useEffect)..."
2. "Context API for global state..."

Connected via waypoints (1 hop):
3. "Redux vs Zustand comparison..."
4. "When to lift state up in React..."

Connected via waypoints (2 hops):
5. "Performance optimization with useMemo..."
6. "React Server Components and state..."

Connected via waypoints (3 hops):
7. "Team decision: avoid Redux for new projects..."
8. "Common state management pitfalls..."

Would you like me to summarize the key insights across all these?

Multi-Client Workflow

The Power: Same memory system across all tools simultaneously!

Example Day:

9am - Claude Desktop (Strategy Discussion):
Store Reflective memory: "Q1 2025 strategic focus:
1. Scale to 10K customers
2. Launch Enterprise tier
3. Expand into EU market
Decision made in leadership meeting 2025-01-25"
10am - Continue in VS Code (Coding):
Search: "Enterprise tier requirements"
→ Gets the Q1 strategy memory you stored 1 hour ago
→ Plus related memories about enterprise features
→ You have full context while coding
2pm - Cursor (Different Project):
Search: "Q1 priorities"
→ Same strategic memory appears
→ Context follows you across IDEs and projects
5pm - Zed (Writing Docs):
Search: "EU market expansion"
→ Gets the strategy memory
→ Plus any related customer feedback stored throughout the day
One memory system. Complete context. Everywhere.

Troubleshooting

Connection Failed

Check:
  1. API key is correct (no extra spaces)
  2. Tenant ID matches your repository
  3. Network allows HTTPS to api.ulpi.io
  4. JSON syntax is valid (use jsonlint.com)
  5. AI assistant fully restarted (not just reload)
Test: curl https://api.ulpi.io/health should return 200
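Items 1, 2, and 4 of the checklist can be automated locally. A hypothetical linter sketch (a tenant ID can only be checked for presence here, not matched against your repository, and network access is out of scope):

```python
import json

def lint_mcp_config(raw: str) -> list[str]:
    """Check a pasted server config against the checklist above.
    Returns a list of problems; an empty list means it looks OK."""
    try:
        cfg = json.loads(raw)                # checklist item 4: valid JSON
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    headers = cfg.get("headers", {})
    auth = headers.get("Authorization", "")
    if auth != auth.strip():                 # checklist item 1: no extra spaces
        problems.append("API key has extra whitespace")
    if not auth.startswith("Bearer "):
        problems.append('Authorization must start with "Bearer "')
    if not headers.get("X-Tenant-ID"):       # item 2: tenant ID present at least
        problems.append("X-Tenant-ID is missing or empty")
    if cfg.get("transport") != "sse":
        problems.append('transport must be "sse"')
    return problems
```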
Tools Not Appearing

Solutions:
  1. Verify you edited the correct config file
  2. Check file permissions (must be readable)
  3. Ensure your AI assistant supports MCP (check version)
  4. Look for errors in assistant’s debug logs
  5. Try removing and re-adding the server config
Authentication Errors

Fix:
  1. Regenerate API key in admin panel
  2. Verify “Bearer ” prefix in Authorization header
  3. Check key hasn’t expired (1-year expiry)
  4. Ensure key is scoped to correct tenant
Memories Not Saving

Debug:
  1. Check admin panel → Memory Resource (correct tenant?)
  2. Verify store-memory returns a memory ID
  3. Check Embedding Logs for generation errors
  4. Ensure you haven’t hit storage limits

Advanced Configuration

Custom Transport Settings

If your network requires specific settings:
{
  "ulpi-memory": {
    "url": "https://api.ulpi.io/mcp/memory",
    "headers": {
      "Authorization": "Bearer YOUR_API_KEY",
      "X-Tenant-ID": "your-repo-id"
    },
    "transport": "sse",
    "timeout": 30000,
    "retry": {
      "maxAttempts": 3,
      "backoff": 1000
    }
  }
}
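The timeout and retry fields above are client-side settings, and whether a given client honors them depends on that client. As a sketch of the retry behavior they describe (we assume the backoff doubles between attempts, which the snippet above doesn't actually specify):

```python
import time

def with_retry(operation, max_attempts: int = 3, backoff_ms: int = 1000,
               sleep=time.sleep):
    """Retry a flaky operation: up to max_attempts tries, waiting
    backoff_ms, then 2x, 4x, ... between attempts (assumed exponential).
    Re-raises the last error if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            sleep(backoff_ms * 2 ** (attempt - 1) / 1000)
```

The injectable sleep parameter exists only so the behavior can be exercised without real delays.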

Multiple Repositories

Configure multiple memory systems:
{
  "mcpServers": {
    "ulpi-memory-personal": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer personal_api_key",
        "X-Tenant-ID": "personal-repo-id"
      },
      "transport": "sse"
    },
    "ulpi-memory-work": {
      "url": "https://api.ulpi.io/mcp/memory",
      "headers": {
        "Authorization": "Bearer work_api_key",
        "X-Tenant-ID": "work-repo-id"
      },
      "transport": "sse"
    }
  }
}
Your AI can then access both memory systems!

Next Steps


Need help with MCP setup? Contact support@ulpi.io with your AI assistant details and we’ll help you get connected.