
Give Your AI Agents Instant Access to Your Entire Codebase

Stop wasting 30 minutes daily searching for that one config file, API pattern, or deployment guide. Your AI can’t read your internal docs. It hallucinates outdated patterns. You copy-paste the same wiki pages into every chat. Your team’s knowledge is trapped across 47 repositories, 12 wikis, and hundreds of README files. Your documentation is invisible to AI. ULPI makes it searchable in seconds.

25x More Efficient

2,000 tokens of relevant docs vs. 50,000 tokens of full documentation. Your AI gets exactly what it needs, nothing more.

Sub-50ms Search Latency

Semantic search returns precise answers in under 50 milliseconds. Faster than you can type the question.

100% Up-to-Date

Automatic sync on every git push means documentation is never stale. What's in your repo is what AI sees.

40+ AI Tools

Works with Claude Code, Cursor, Windsurf, Continue, and more. One integration, every tool.

The Problem: Documentation Black Holes

You’ve experienced this frustration:
  • 📄 “Where was that deployment guide again?” - 15 minutes searching across repos
  • 🔍 Your AI hallucinates patterns because it can’t access your actual docs
  • 📚 Copy-paste the same setup instructions into every AI chat session
  • 👥 New developers ask the same questions for 2 weeks straight
  • 🏗️ Architecture decisions forgotten because they’re buried in old PRs
  • 30 minutes daily hunting for information you know exists somewhere
Your knowledge is scattered:
  • 47 repositories with README files
  • 12 different wiki systems
  • Architecture Decision Records (ADRs) lost in old branches
  • API docs that may or may not match current code
  • Setup guides that worked 6 months ago (maybe)
  • Slack threads with the real answers (good luck finding them)
The cost:
  • 30 minutes/day per developer searching for documentation
  • 2 weeks to onboard new team members
  • AI assistants that don’t know your patterns and give generic advice
  • Duplicate documentation because no one knows what exists where

Why Traditional Search Fails

GitHub’s code search:
  • ❌ Keyword-only (must know exact terms)
  • ❌ One repository at a time
  • ❌ No AI assistant integration
  • ❌ Doesn’t understand technical concepts
Manual wiki search:
  • ❌ Stale documentation (last updated: 2 years ago)
  • ❌ Requires humans to search and read
  • ❌ Can’t help AI assistants
  • ❌ Slow and frustrating
Copy-pasting docs into AI chats:
  • ❌ Wastes 50,000 tokens on full documentation
  • ❌ Context fills up quickly
  • ❌ Must manually find and paste
  • ❌ Always out of date

ULPI Documentation makes your entire codebase documentation instantly searchable—for you and your AI assistants.
  • Without ULPI Documentation
  • With ULPI Documentation

The Manual Grind

Monday 9 AM: You need deployment instructions
1. Check main repo README → Not there
2. Search GitHub → Too many results
3. Check wiki → Deployment guide from 2022 (outdated?)
4. Ask in Slack → Wait 2 hours for response
5. Finally find it in infrastructure repo README
Time wasted: 45 minutes
Your AI assistant:
You: "How do we deploy to production?"

AI: "Here's a general guide to deploying applications..."
[Generates generic steps that don't match your actual process]
You: Copy-paste 3,000 lines of deployment docs into chat
AI: “Based on your documentation…” [Uses 40,000 tokens]
Context limit: Approaching fast
Every. Single. Session.

How It Works: 3 Steps to Searchable Documentation

1

Connect Your Repositories (2 minutes)

One click to connect GitHub, GitLab, Bitbucket, or Gitea:
# We automatically discover and index:
- README.md files (all directories)
- docs/ and documentation/ directories
- Wiki pages
- Architecture Decision Records (ADRs)
- API documentation
- Setup and deployment guides
- Code comments (optional)
Multi-repo support: Connect 5, 50, or 500 repositories
Private repos: Fully supported with read-only OAuth access
Monorepos: Handles large monorepos efficiently
No code changes required. Just point ULPI at your repositories.
2

Automatic Indexing (happens in background)

We process your documentation using AI.
What we extract:
  • Document structure (headings, sections)
  • Code examples and snippets
  • Technical concepts and terminology
  • Relationships between documents
  • API endpoints and parameters
  • Configuration patterns
Semantic embeddings:
  • Every document chunk gets vector embeddings
  • Enables meaning-based search (not just keywords)
  • Understands synonyms, related concepts, technical jargon
Automatic sync:
  • Webhooks trigger on every git push
  • Re-indexing happens in under 1 minute
  • Always reflects current state of your repos
First-time indexing: 2-5 minutes for typical repositories (~1,000 files)
Updates after push: under 1 minute
3

Search from Any AI Tool (instant)

Your AI assistant queries ULPI automatically:
// In Claude Code, Cursor, Windsurf, Continue, etc.
User: "How do I configure Redis caching?"

// Behind the scenes:
AI → ULPI Documentation → Semantic search
Returns relevant docs from all repositories
AI synthesizes answer with your actual documentation
Response includes source links

Latency: <50ms
Tokens used: 2,000 (vs 50,000 for full docs)
Accuracy: Your actual patterns, not generic advice
No manual steps. AI gets documentation context automatically.
What is semantic search? Unlike keyword search (exact word matching), semantic search understands meaning.
Example:
  • Query: “How do I handle database schema changes?”
  • Keyword search: Looks for exact words “database”, “schema”, “changes”
  • Semantic search: Understands you’re asking about migrations, versioning, deployment—finds relevant docs even if they say “migration” instead of “schema changes”
Result: Finds the right documentation even if it uses different terminology.
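The retrieval step behind this can be sketched as ranking pre-computed chunk embeddings by cosine similarity against the query embedding. The three-dimensional vectors below are toy values for illustration only; a production index would use model-generated embeddings with thousands of dimensions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings; a real system derives these from an embedding model.
chunks = {
    "docs/migrations.md": [0.9, 0.1, 0.0],
    "docs/deploy.md":     [0.2, 0.8, 0.1],
    "README.md":          [0.1, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # embedding of "How do I handle schema changes?"

ranked = sorted(chunks, key=lambda k: cosine(query_vec, chunks[k]), reverse=True)
print(ranked[0])  # docs/migrations.md ranks first despite sharing no keywords
```

Because similarity is computed in embedding space, the migration doc wins even though the query never says "migration".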

25x Token Efficiency: The Math

Traditional approach (loading full documentation):
Your project has:
- 50 README files × 500 lines each = 25,000 lines
- 15 architecture docs × 300 lines each = 4,500 lines
- 10 API docs × 400 lines each = 4,000 lines
- 20 setup guides × 200 lines each = 4,000 lines

Total: ~37,500 lines ≈ 50,000 tokens

Problem:
- Fills AI's context window quickly
- 90% is irrelevant to current question
- Can only fit docs in chat 3-4 times before hitting limits
- Wastes tokens on unrelated information
ULPI approach (semantic search returns only relevant sections):
User asks: "How do I deploy to staging?"

ULPI returns:
- Deployment guide (relevant sections): 800 tokens
- Environment config docs: 400 tokens
- CI/CD pipeline description: 600 tokens
- Staging-specific notes: 200 tokens

Total: ~2,000 tokens

Benefit:
- 25x more efficient (50,000 → 2,000 tokens)
- Only relevant information included
- Can ask 25 questions vs 1 before filling context
- AI focuses on what matters
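The arithmetic behind the 25x claim, as a quick sanity check (the 50,000-token context budget for documentation is an assumption matching the comparison above):

```python
# Figures from the comparison above.
full_docs_tokens = 50_000   # loading all documentation into the chat
ulpi_tokens = 2_000         # only the sections relevant to the question

efficiency = full_docs_tokens / ulpi_tokens
print(f"{efficiency:.0f}x fewer tokens per question")   # 25x

# With a fixed token budget for docs, the same ratio bounds how many
# more questions fit before the window fills up.
budget = 50_000             # hypothetical per-conversation docs budget
print(budget // ulpi_tokens, "questions vs", budget // full_docs_tokens)  # 25 vs 1
```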

Real-World Impact

Questions Per Session

Without ULPI: 1-2 questions before context is full
With ULPI: 25+ questions
25x more productivity in a single AI session

Context Relevance

Without ULPI: 10% relevant (90% noise)
With ULPI: 95% relevant
9.5x better signal-to-noise ratio

Manual Search Time

Without ULPI: 30 min/day finding docs
With ULPI: 0 min/day
125 hours saved annually per developer

AI Accuracy

Without ULPI: Generic advice, often wrong
With ULPI: Your actual patterns
100% alignment with team standards

Key Features

Natural Language Queries

Ask questions exactly as you would ask a colleague:
"How do we handle authentication in the API?"
→ Finds auth middleware docs, JWT configuration, security policies

"What's our deployment process for staging?"
→ Finds CI/CD configs, deployment scripts, staging environment setup

"Where are database models defined?"
→ Finds ORM documentation, schema files, migration guides

"Show me examples of writing unit tests"
→ Finds test examples, testing conventions, Jest/Mocha configs

"What environment variables are required?"
→ Finds .env.example files, deployment docs, config guides

"How do I add a new API endpoint?"
→ Finds REST conventions, routing patterns, example endpoints

"What's our code review process?"
→ Finds contribution guidelines, PR templates, review checklists
The AI understands:
  • Technical synonyms (“deploy” = “ship” = “release”)
  • Related concepts (“auth” includes OAuth, JWT, sessions)
  • Your project’s terminology
  • Context and intent

Search across your entire organization at once:
  • All Repositories
  • Branch-Specific
Cross-repo search (most powerful):
Query: "Which services use Redis?"

ULPI searches all 47 repositories and finds:
- backend-api/README.md: Redis for session storage
- cache-service/docs/architecture.md: Redis cluster setup
- worker-service/README.md: Redis for job queues
- infrastructure/deployment.md: Redis configuration
- mobile-backend/docs/caching.md: Redis caching layer
Perfect for:
  • Microservices architectures
  • Understanding system-wide patterns
  • Finding all usages of a technology
  • Discovering service dependencies

Automatic Synchronization

Your documentation is always current:
Push-based updates (no manual sync needed):
  1. Developer pushes code to GitHub/GitLab
  2. Webhook fires instantly
  3. ULPI re-indexes changed files only
  4. Search updated in under 60 seconds
What triggers re-indexing:
  • Git push to any connected branch
  • Pull request merges
  • Wiki page updates
  • Manual trigger from ULPI dashboard
Speed:
  • Small changes (1-10 files): 10-20 seconds
  • Medium changes (10-100 files): 30-60 seconds
  • Large changes (100+ files): 1-3 minutes
  • Full re-index (all repos): 5-10 minutes
Zero configuration. Webhooks are set up automatically when you connect repositories.
Automatically discovered documentation:
Always indexed:
  • README.md, README.txt (all directories)
  • docs/ and documentation/ directories
  • .md and .mdx files
  • Architecture Decision Records (ADRs)
  • CONTRIBUTING.md, CHANGELOG.md
Optionally indexed:
  • Wiki pages (enable per repository)
  • Code comments and docstrings (enable globally)
  • *.txt files in doc directories
  • Jupyter notebooks (.ipynb)
Automatically excluded:
  • node_modules/, vendor/
  • Build and dist directories
  • .git/ directories
  • Binary files
  • Large files (>1MB)
Custom exclusions: Use .ulpiignore file (works like .gitignore)
How we make documentation searchable:
Structure extraction:
  • Headings hierarchy (H1, H2, H3)
  • Table of contents
  • Section relationships
  • Cross-references between docs
Content understanding:
  • Code blocks and syntax
  • API endpoints and parameters
  • Configuration examples
  • Technical terminology
Metadata preservation:
  • File paths and line numbers
  • Last updated timestamps
  • Authors (from git history)
  • Related documents
Semantic chunking:
  • Break documents into logical sections
  • Each chunk gets vector embedding
  • Enables precise retrieval
  • Returns only relevant sections (not entire files)
Result: When you ask “How do I deploy?”, you get the 200-line deployment section—not the entire 2,000-line infrastructure guide.
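A minimal sketch of heading-based semantic chunking, assuming a simple H1-H3 split (ULPI's actual chunker is not published; this only illustrates the idea of embedding sections rather than whole files):

```python
import re

def chunk_by_headings(markdown: str):
    """Split a markdown document into (heading, body) sections so each
    section can be embedded and retrieved independently."""
    chunks, heading, body = [], "(intro)", []
    for line in markdown.splitlines():
        if re.match(r"#{1,3} ", line):          # an H1-H3 starts a new chunk
            if body:
                chunks.append((heading, "\n".join(body).strip()))
            heading, body = line.lstrip("# ").strip(), []
        else:
            body.append(line)
    if body:
        chunks.append((heading, "\n".join(body).strip()))
    return chunks

doc = "# Infra guide\nIntro text.\n## Deployment\nRun make deploy.\n## Rollback\nRun make rollback.\n"
for heading, body in chunk_by_headings(doc):
    print(heading, "->", body)
```

Asking "How do I deploy?" then retrieves only the Deployment chunk, not the whole guide.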

Integration with 40+ AI Tools

Works with every major AI coding assistant via MCP (Model Context Protocol):
  • AI Chat Apps
  • Code Editors & IDEs
  • CLI & Terminal Tools
  • How It Works in Practice
Desktop AI assistants:
  • Claude Desktop - Anthropic’s official app (best integration)
  • Perplexity Desktop - AI research with your docs
  • BoltAI - macOS native AI assistant
Setup: Add ULPI MCP server to claude_desktop_config.json
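An entry of roughly this shape goes into claude_desktop_config.json; the server name, command, and package below are illustrative assumptions, not the published ULPI values, so use the exact snippet from the setup guide:

```json
{
  "mcpServers": {
    "ulpi-docs": {
      "command": "npx",
      "args": ["-y", "@ulpi/mcp-server"],
      "env": { "ULPI_API_KEY": "ulpi_sk_..." }
    }
  }
}
```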

Real-World Use Cases

The Challenge: A new developer (Sarah) joins your team:
  • Day 1: “How do I set up my local environment?”
  • Day 3: “Where’s the API documentation?”
  • Week 1: “What’s our git workflow?”
  • Week 2: Still asking basic questions in Slack
Time to productivity: 2-3 weeks
With ULPI Documentation:
Day 1:
Sarah asks AI: "How do I set up my development environment?"

AI queries ULPI → Finds setup.md from infrastructure repo:
"Here's the complete setup process:
1. Install Node 18+, PostgreSQL 14, Redis
2. Clone repos: git clone ...
3. Copy .env.example to .env
4. Run npm install && npm run db:migrate
5. Start dev server: npm run dev
[Links to setup.md with troubleshooting guide]"
Day 2:
Sarah: "What are our coding standards?"
AI → ULPI → "We use ESLint with Airbnb config..."
Day 3:
Sarah: "How do I create a pull request?"
AI → ULPI → "Follow our PR template in .github/PULL_REQUEST_TEMPLATE.md..."
Time to productivity with ULPI: 3 days
Sarah's feedback: “I didn't have to bother the team with basic questions. The AI knew everything.”
The Challenge: AI assistants give generic advice that doesn't match your project:
You: "Add error handling to this function"

AI (without ULPI): "Here's generic try-catch:
try {
  // your code
} catch (err) {
  console.log(err); // ❌ Not your pattern
}"
With ULPI Documentation:
You: "Add error handling to this function"

AI (with ULPI):
[Queries ULPI: "error handling patterns"]
[Finds docs/conventions.md: "Use Winston logger, return {success, error}"]

"Based on your error handling conventions in docs/conventions.md:

try {
  // your code
} catch (error) {
  logger.error('Function failed', { error, context });
  return { success: false, error: 'Operation failed' };
}

This follows your standard pattern:
- Winston logger for all errors
- Structured logging with context
- {success, error} response format"
AI knows YOUR patterns. Code matches team standards from day one.
The Challenge: You inherit a 5-year-old codebase:
  • Original developers gone
  • Documentation scattered and outdated
  • Critical knowledge in code comments
  • Architecture decisions lost to history
Questions you have:
  • “Why is the payment system designed this way?”
  • “What was the rationale for this database choice?”
  • “How does authentication actually work?”
Answers: Lost in 50,000 lines of code and 200 old PRs
With ULPI Documentation:
Query: "Why do we use Redis for sessions instead of database?"

ULPI finds:
1. Architecture Decision Record (ADR-012-session-storage.md):
   "Chose Redis over PostgreSQL sessions because:
    - 10x faster read performance
    - Automatic expiration (TTL)
    - Horizontal scaling easier
    - Session data is ephemeral anyway"

2. Performance analysis (docs/benchmarks.md):
   "Redis: 50,000 req/sec vs PostgreSQL: 5,000 req/sec"

3. Implementation notes in code comments:
   "// Redis session store - see ADR-012 for rationale"
Historical context recovered. Understand why, not just what.
Enable code-comment indexing:
{
  "includeCodeComments": true  // Indexes docstrings, JSDoc, PHPDoc
}
The Challenge: SOC 2 audit next week. The auditor asks:
  • “Show me your data retention policy”
  • “Where’s your incident response procedure?”
  • “How do you handle PII data?”
  • “What’s your access control documentation?”
Your documentation is scattered:
  • Security wiki (last updated: 2021)
  • GDPR compliance doc (in legal repo)
  • PII handling (in backend README)
  • Access control (in IAM configuration comments)
  • Incident response (Notion doc? Confluence?)
Time to find everything: 3-4 hours
With ULPI Documentation:
Query: "Show all documentation related to PII data handling"

ULPI finds across all repositories:
- security/GDPR-compliance.md: Data retention schedules
- backend/docs/data-privacy.md: PII encryption standards
- infrastructure/access-control.md: Who can access PII
- legal/privacy-policy.md: User data rights
- backend/models/user.ts comments: "// PII fields encrypted at rest"

Export all results → PDF for auditor
Time to prepare audit documentation: 10 minutes
Bonus: All docs include source links, so the auditor can verify they're real and current.
The Challenge: 3 AM. Production is down. Error: ECONNREFUSED Redis connection failed
Frantic search:
  • “Where’s the Redis configuration?”
  • “What’s the failover procedure?”
  • “Who has access to Redis dashboard?”
  • Searching GitHub while systems are down…
Time pressure: Every minute = lost revenue
With ULPI Documentation:
Query: "Redis connection failed - troubleshooting steps"

ULPI instantly finds (in 0.3 seconds):

1. infrastructure/runbooks/redis-troubleshooting.md:
   "ECONNREFUSED errors:
    1. Check Redis status: kubectl get pods -n redis
    2. Verify connection string in K8s secrets
    3. Failover to replica: helm upgrade --set redis.primary=redis-replica
    4. Contact: DevOps on-call (see on-call.md)"

2. infrastructure/redis-config.yaml:
   Connection details, failover settings

3. docs/on-call.md:
   On-call engineer contact info
Resolution time: 5 minutes instead of 45 minutes
Quote from an SRE: “ULPI runbook search saved us during a 3 AM outage. Found the failover procedure in seconds.”

Pricing

Starter

$29/month
Perfect for small teams:
  • 5 repositories
  • 100,000 tokens/month
  • Semantic search
  • Auto-sync on push
  • MCP integration (40+ tools)
  • Email support
~15-20 searches/day

Professional

$99/month
Best for growing teams:
  • 25 repositories
  • 500,000 tokens/month
  • Everything in Starter
  • Advanced filters
  • Team collaboration
  • Priority support
  • Usage analytics
~75-100 searches/day

Enterprise

$299/month
For large organizations:
  • Unlimited repositories
  • 2,000,000 tokens/month
  • Everything in Professional
  • Custom integrations
  • SSO / SAML
  • SLA guarantees
  • Dedicated support
~300+ searches/day
Bundle and save 12-20%! Combine Documentation with Skills, Memory, Coordination, and Hooks:
  • Coordination + Memory + Documentation: Save 15%
  • Full Stack (all 5 products): Save 20%
See all 33 bundle combinations
Flexible overage billing:
  • Additional tokens: $20 per 100,000 tokens
  • No service interruption when you exceed limit
  • Only pay for what you use
  • Set usage alerts and caps in dashboard
Detailed pricing

Success Metrics

Teams using ULPI Documentation report:

30 Min Daily Saved

Per developer, per day
125 hours saved annually ≈ $12,500 at $100/hr

2 Weeks → 3 Days

Onboarding time reduction
78% faster time-to-productivity for new hires

25x Token Efficiency

2,000 tokens vs 50,000 tokens
AI context lasts 25x longer in conversations

Zero Hallucinations

AI uses your actual documentation
100% accuracy on team-specific patterns

Getting Started

Ready to make your documentation instantly searchable?
1

Create Free Account

Sign up at app.ulpi.io
Connect with your GitHub account (OAuth, read-only access)
2

Connect Repositories

Select repositories to index:
  • Private or public repositories
  • Choose specific repos or all organization repos
  • Webhooks set up automatically
Indexing starts immediately (2-5 minutes for first sync)
3

Generate API Key

Create API key for your AI tools:
  • Go to Settings → API Keys
  • Click “Generate New Key”
  • Copy token (starts with ulpi_sk_...)
Scope to specific repositories for security
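Once you have a key, a raw HTTP call might look like the sketch below. The endpoint path and request fields are assumptions for illustration (only the ulpi_sk_ key format comes from the docs); consult the REST API reference for the real contract.

```python
import json
import urllib.request

API_KEY = "ulpi_sk_your_key_here"            # from Settings → API Keys
# Endpoint path is an illustrative assumption, not a documented route.
url = "https://api.ulpi.io/v1/search"

payload = {"query": "How do I deploy to staging?", "limit": 5}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
print(req.get_header("Authorization"))  # Bearer ulpi_sk_your_key_here
# urllib.request.urlopen(req) would send the request.
```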
4

Configure AI Assistant

Add ULPI to your favorite tool. Choose your tool below; configuring the MCP server takes 2 minutes.
5

Start Searching!

Ask your AI assistant documentation questions:
"How do I deploy to production?"
"What's our authentication flow?"
"Show me API endpoint examples"
"Where are environment variables defined?"
AI gets instant answers from your actual docs.

Detailed Setup Guide

Follow our step-by-step guide with screenshots and troubleshooting

FAQ

Automatically indexed:
  • Markdown files: .md, .mdx
  • Text files: .txt (in doc directories)
  • README files: Any README.* in any directory
  • Wiki pages: Repository wikis (optional)
  • Notebooks: .ipynb Jupyter notebooks (optional)
Optionally indexed:
  • Code comments: JSDoc, PHPDoc, Python docstrings (enable globally)
  • Configuration files: .yaml, .json with embedded docs
Standard locations auto-discovered:
  • README.md (all directories)
  • docs/ and documentation/ directories
  • ADR/ and adr/ (Architecture Decision Records)
  • .github/ (GitHub-specific docs)
  • Wiki pages (if enabled)
Custom exclusions: .ulpiignore file (works like .gitignore)
Near-instant via webhooks:
  • Git push → webhook fires (within 1 second)
  • ULPI starts re-indexing changed files only
  • Updates searchable in 30-60 seconds
Timing examples:
  • 1-5 files changed: 15-30 seconds
  • 10-50 files changed: 45-90 seconds
  • 100+ files changed: 2-3 minutes
  • Full repository (1,000 files): 3-5 minutes
Manual trigger: Force re-index anytime from dashboard (usually not needed)
Real-world speed: Push to main → AI can search new docs in under 1 minute
Yes! Full private repository support.
Security:
  • OAuth with read-only access (can’t modify your code)
  • API keys are tenant-scoped (only your team can access)
  • Data encrypted in transit and at rest
  • GDPR and SOC 2 compliant
Privacy:
  • Your docs never used for AI training
  • Not shared with other ULPI customers
  • Only accessible via your API keys
  • Deleted within 30 days if you cancel
Enterprise features:
  • Self-hosted Git servers (GitLab, Gitea on-premise)
  • VPC peering for extra security
  • SSO / SAML authentication
  • Audit logs of all access
Security details
Tokens measure AI processing.
What uses tokens:
  • Your search query length
  • Documentation content returned
  • Semantic embedding computation
Typical token usage:
  • Simple query (“How to deploy?”): 500-1,000 tokens
  • Complex query with examples: 2,000-3,000 tokens
  • Very detailed query returning multiple docs: 3,000-5,000 tokens
Average team usage:
  • Solo developer: 20,000-50,000 tokens/month
  • Small team (5 people): 50,000-150,000 tokens/month
  • Medium team (15 people): 200,000-400,000 tokens/month
  • Large team (50 people): 1,000,000+ tokens/month
Dashboard tracking:
  • Real-time token usage graphs
  • Projected monthly usage
  • Per-repository breakdown
  • Usage alerts when approaching limit
Pro tip: The Starter plan includes 100k tokens. If your team uses 125k, you pay $29 + $5 (25k overage) = $34 total. No service interruption.
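The overage arithmetic from that tip, written out (assuming overage is pro-rated per token, which the $5-for-25k example implies):

```python
def monthly_cost(tokens_used: int, base_price: int = 29,
                 included: int = 100_000, overage_per_100k: int = 20) -> float:
    """Starter-plan bill: base price plus pro-rated overage
    at $20 per additional 100,000 tokens (assumed pro-rated)."""
    overage_tokens = max(0, tokens_used - included)
    return base_price + overage_tokens / 100_000 * overage_per_100k

print(monthly_cost(125_000))  # 34.0, the $29 + $5 example
print(monthly_cost(80_000))   # 29.0, under the included quota
```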
Yes! Multiple access methods:
1. Web Dashboard (app.ulpi.io):
  • Search via web interface
  • Browse repositories
  • View documentation directly
  • Export search results
2. REST API:
  • Direct API access
  • Integrate into your apps
  • Build custom tools
  • Automate workflows
3. AI Assistants (most popular):
  • Claude Code, Cursor, Windsurf, Continue, etc.
  • AI queries ULPI automatically
  • Most natural experience
Recommendation: AI integration provides the best UX (just ask questions), but it's not required.
Search infrastructure:
  • Embeddings: OpenAI text-embedding-3-large (best accuracy)
  • Vector Store: Typesense (semantic similarity search)
  • Ranking: Hybrid scoring (semantic + keyword relevance)
  • Hosting: Dedicated infrastructure (low latency)
Important privacy notes:
  • ULPI processes searches, but your AI assistant (Claude, GPT-4, etc.) generates the final answers
  • Your documentation is never used for training AI models
  • We only create embeddings for search indexing (discarded after use)
  • OpenAI's embeddings API operates under a zero-data-retention policy
Privacy policy
Yes! Multiple exclusion methods:
1. .ulpiignore file (recommended):
# In repository root
# Exclude generated docs
/docs/api/generated/

# Exclude drafts
**/drafts/**
**/*.draft.md

# Exclude specific files
CHANGELOG.md
/docs/archive/

# Exclude by pattern
*.bak
*-old.md
2. Dashboard settings:
  • Configure patterns via web UI
  • Apply to specific repositories
  • Global exclusions (all repos)
3. Default exclusions (automatic):
  • node_modules/, vendor/
  • .git/, .github/workflows/
  • Build directories: dist/, build/, out/
  • Binary files
  • Files >1MB
Use cases:
  • Exclude auto-generated API docs
  • Skip draft documentation
  • Ignore archived content
  • Prevent indexing sensitive files (shouldn’t be in repo anyway!)

Next Steps