Give Your AI Agents Instant Access to Your Entire Codebase
Stop wasting 30 minutes daily searching for that one config file, API pattern, or deployment guide. Your AI can’t read your internal docs. It hallucinates outdated patterns. You copy-paste the same wiki pages into every chat. Your team’s knowledge is trapped across 47 repositories, 12 wikis, and hundreds of README files.

Your documentation is invisible to AI. ULPI makes it searchable in seconds.
25x More Efficient
2,000 tokens of relevant docs vs 50,000 tokens of full documentation. Your AI gets exactly what it needs, nothing more.
Sub-50ms Search Latency
Semantic search returns precise answers in under 50 milliseconds. Faster than you can type the question.
100% Up-to-Date
Automatic sync on every git push, so documentation is never stale. What’s in your repo is what AI sees.
40+ AI Tools
Works with Claude Code, Cursor, Windsurf, Continue, and more. One integration, every tool.
1. Check main repo README → Not there
2. Search GitHub → Too many results
3. Check wiki → Deployment guide from 2022 (outdated?)
4. Ask in Slack → Wait 2 hours for response
5. Finally find it in infrastructure repo README
Time wasted: 45 minutes

Your AI assistant:
You: "How do we deploy to production?"
AI: "Here's a general guide to deploying applications..."
[Generates generic steps that don't match your actual process]
You: Copy-paste 3,000 lines of deployment docs into chat
AI: “Based on your documentation…” [Uses 40,000 tokens]
Context limit: Approaching fast

Every. Single. Session.
You ask AI: "How do we deploy to production?"

ULPI automatically:
1. Searches all 47 repositories (semantic search)
2. Finds deployment.md in infrastructure repo
3. Returns relevant sections only (2,000 tokens)
4. Provides direct links to source

Time: 0.3 seconds
Time wasted: 0 minutes

Your AI assistant:
You: "How do we deploy to production?"
AI: [Queries ULPI Documentation automatically]
"Based on your deployment guide in infrastructure/docs/deployment.md:
1. Run `npm run build` to create the production bundle
2. Push to the deploy branch: `git push origin main:deploy`
3. GitHub Actions automatically deploys to AWS
4. Monitor at https://status.yourcompany.com
[Links to deployment.md:15-45]"
AI knows your actual process. No hallucinations. No manual search.

Every session starts with full context automatically.
One click to connect GitHub, GitLab, Bitbucket, or Gitea:
We automatically discover and index:
- README.md files (all directories)
- docs/ and documentation/ directories
- Wiki pages
- Architecture Decision Records (ADRs)
- API documentation
- Setup and deployment guides
- Code comments (optional)
Multi-repo support: Connect 5, 50, or 500 repositories
Private repos: Fully supported with read-only OAuth access
Monorepos: Handles large monorepos efficiently
No code changes required. Just point ULPI at your repositories.
2. Automatic Indexing (happens in the background)
We process your documentation using AI.

What we extract:
Document structure (headings, sections)
Code examples and snippets
Technical concepts and terminology
Relationships between documents
API endpoints and parameters
Configuration patterns
Semantic embeddings:
Every document chunk gets vector embeddings
Enables meaning-based search (not just keywords)
Understands synonyms, related concepts, technical jargon
Automatic sync:
Webhooks trigger on every git push
Re-indexing happens in under 1 minute
Always reflects current state of your repos
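The push-to-re-index flow above can be sketched in a few lines. This is a hypothetical stand-in for ULPI's internals, not its actual code: the payload shape follows GitHub's push-webhook format, and the job queue is just a list.

```python
# Hypothetical sketch of a push-triggered re-index handler.
# Payload shape follows GitHub's push-webhook format; the queue
# and function name are illustrative, not ULPI's real internals.

def handle_push_webhook(payload: dict, queue: list) -> bool:
    """Queue a re-index job for the pushed branch.

    Returns True if a job was queued, False if the push was
    ignored (e.g. a tag push or an empty commit list).
    """
    repo = payload.get("repository", {}).get("full_name")
    ref = payload.get("ref", "")          # e.g. "refs/heads/main"
    commits = payload.get("commits", [])
    if not repo or not ref.startswith("refs/heads/") or not commits:
        return False
    branch = ref.removeprefix("refs/heads/")
    # Re-index only the files that actually changed in this push.
    changed = sorted({f for c in commits
                      for f in c.get("added", []) + c.get("modified", [])})
    queue.append({"repo": repo, "branch": branch, "files": changed})
    return True
```

Restricting re-indexing to the changed files is what keeps updates under a minute even for large repositories.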
First-time indexing: 2-5 minutes for typical repositories (1,000 files)
Updates after push: Under 1 minute
3. Search from Any AI Tool (instant)
Your AI assistant queries ULPI automatically:
// In Claude Code, Cursor, Windsurf, Continue, etc.
User: "How do I configure Redis caching?"

// Behind the scenes:
AI → ULPI Documentation → Semantic Search
→ Returns relevant docs from all repositories
→ AI synthesizes answer with your actual documentation
→ Response includes source links

Latency: <50ms
Tokens used: 2,000 (vs 50,000 for full docs)
Accuracy: Your actual patterns, not generic advice
No manual steps. AI gets documentation context automatically.
What is semantic search?

Unlike keyword search (exact word matching), semantic search understands meaning.

Example:
Query: “How do I handle database schema changes?”
Keyword search: Looks for exact words “database”, “schema”, “changes”
Semantic search: Understands you’re asking about migrations, versioning, deployment—finds relevant docs even if they say “migration” instead of “schema changes”
Result: Finds the right documentation even if it uses different terminology.
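The idea behind embedding-based retrieval fits in a few lines. The tiny hand-made 3-d vectors below stand in for a real embedding model's output; the point is only that ranking by cosine similarity matches on meaning, not words:

```python
import math

# Toy 3-d "embeddings" standing in for a real model's output.
# Dimensions loosely mean: (schema changes, deployment, testing).
docs = {
    "migration-guide.md":  (0.9, 0.3, 0.0),
    "deploy-runbook.md":   (0.1, 0.9, 0.0),
    "testing-handbook.md": (0.0, 0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# "How do I handle database schema changes?" embeds near the
# schema axis even though it never says the word "migration".
query = (0.8, 0.2, 0.1)

best = max(docs, key=lambda name: cosine(query, docs[name]))
# best → "migration-guide.md"
```

A real system does the same thing with high-dimensional model embeddings and an approximate-nearest-neighbor index instead of a brute-force `max`.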
Traditional approach (loading full documentation):
Your project has:
- 50 README files × 500 lines each = 25,000 lines
- 15 architecture docs × 300 lines each = 4,500 lines
- 10 API docs × 400 lines each = 4,000 lines
- 20 setup guides × 200 lines each = 4,000 lines
Total: ~37,500 lines ≈ 50,000 tokens

Problem:
- Fills AI's context window quickly
- 90% is irrelevant to the current question
- Can only fit docs in chat 3-4 times before hitting limits
- Wastes tokens on unrelated information
ULPI approach (semantic search returns only relevant sections):
User asks: "How do I deploy to staging?"

ULPI returns:
- Deployment guide (relevant sections): 800 tokens
- Environment config docs: 400 tokens
- CI/CD pipeline description: 600 tokens
- Staging-specific notes: 200 tokens
Total: ~2,000 tokens

Benefit:
- 25x more efficient (50,000 → 2,000 tokens)
- Only relevant information included
- Can ask 25 questions vs 1 before filling context
- AI focuses on what matters
Ask questions exactly as you would ask a colleague:
"How do we handle authentication in the API?"
→ Finds auth middleware docs, JWT configuration, security policies

"What's our deployment process for staging?"
→ Finds CI/CD configs, deployment scripts, staging environment setup

"Where are database models defined?"
→ Finds ORM documentation, schema files, migration guides

"Show me examples of writing unit tests"
→ Finds test examples, testing conventions, Jest/Mocha configs

"What environment variables are required?"
→ Finds .env.example files, deployment docs, config guides

"How do I add a new API endpoint?"
→ Finds REST conventions, routing patterns, example endpoints

"What's our code review process?"
→ Finds contribution guidelines, PR templates, review checklists
Query: "Which services use Redis?"

ULPI searches all 47 repositories and finds:
- backend-api/README.md: Redis for session storage
- cache-service/docs/architecture.md: Redis cluster setup
- worker-service/README.md: Redis for job queues
- infrastructure/deployment.md: Redis configuration
- mobile-backend/docs/caching.md: Redis caching layer
Perfect for:
Microservices architectures
Understanding system-wide patterns
Finding all usages of a technology
Discovering service dependencies
Filter to specific repositories:
Query: "authentication"
Repositories: ["backend-api", "frontend-app"]
Returns: Only auth docs from those 2 repos

Use case: Focus search on relevant services
Search specific branches:
# Search main branch (stable docs)
Query: "deployment"
Branch: "main"

# Search feature branch (work-in-progress docs)
Query: "new authentication flow"
Branch: "feature/oauth-implementation"
Use case: Reference docs for specific versions or features
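Both filters amount to simple predicates applied before semantic ranking. The sketch below is an illustrative in-memory stand-in, not ULPI's actual query API; the index entries and field names are hypothetical:

```python
# Hypothetical in-memory stand-in for a filtered documentation query.
index = [
    {"repo": "backend-api",  "branch": "main", "path": "docs/auth.md"},
    {"repo": "frontend-app", "branch": "main", "path": "README.md"},
    {"repo": "worker",       "branch": "main", "path": "docs/auth.md"},
    {"repo": "backend-api",  "branch": "feature/oauth-implementation",
     "path": "docs/new-auth-flow.md"},
]

def search(index, repos=None, branch="main"):
    """Return indexed docs restricted to the given repos and branch.

    repos=None means "all repositories"; branch defaults to main.
    """
    return [d for d in index
            if d["branch"] == branch
            and (repos is None or d["repo"] in repos)]
```

Filtering first keeps the semantic ranking step focused on a smaller candidate set, which is what makes repo- and branch-scoped queries fast.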
VSCode - Via Continue, Cline, GitHub Copilot extensions
Zed - High-performance editor
PhpStorm, WebStorm, IntelliJ - JetBrains IDEs
Setup: Install MCP extension and configure ULPI server
Command-line AI assistants:
Claude Code - Terminal-based coding (Anthropic)
Warp - AI-powered terminal
Amazon Q CLI - AWS’s AI assistant
Aider - AI pair programming in terminal
OpenHands - Open-source AI coding
Setup: Add ULPI to MCP configuration
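As a rough illustration, an MCP server entry in a tool's config file typically looks like the JSON below. The server name, package, and environment variable are placeholders, not ULPI's actual values; check ULPI's setup docs for the real ones:

```json
{
  "mcpServers": {
    "ulpi-documentation": {
      "command": "npx",
      "args": ["-y", "@ulpi/mcp-server"],
      "env": { "ULPI_API_KEY": "<your-api-key>" }
    }
  }
}
```

The same entry works across MCP-compatible tools; only the location of the config file differs per tool.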
Seamless AI integration example:
# In Cursor, Claude Code, or any MCP-compatible tool:
You: "Create a new API endpoint for user profiles"

AI (thinking):
1. Query ULPI: "What are our API conventions?"
2. ULPI returns: REST patterns, auth middleware, response formats
3. AI generates code following YOUR patterns

AI: "I'll create the endpoint following your REST conventions from docs/api-standards.md:
  - Route: GET /api/v1/users/:id/profile
  - Auth: Use validateJWT middleware from auth.ts
  - Response: Return { success, data } format
  - Error handling: HTTP 404 for not found, 401 for unauthorized
Here's the implementation:
[Code that matches YOUR project standards]"
The Challenge:

A new developer (Sarah) joins your team:
Day 1: “How do I set up my local environment?”
Day 3: “Where’s the API documentation?”
Week 1: “What’s our git workflow?”
Week 2: Still asking basic questions in Slack
Time to productivity: 2-3 weeks

With ULPI Documentation:

Day 1:
Sarah asks AI: "How do I set up my development environment?"

AI queries ULPI → Finds setup.md from infrastructure repo:

"Here's the complete setup process:
1. Install Node 18+, PostgreSQL 14, Redis
2. Clone repos: git clone ...
3. Copy .env.example to .env
4. Run npm install && npm run db:migrate
5. Start dev server: npm run dev
[Links to setup.md with troubleshooting guide]"
Day 2:
Sarah: "What are our coding standards?"AI → ULPI → "We use ESLint with Airbnb config..."
Day 3:
Sarah: "How do I create a pull request?"AI → ULPI → "Follow our PR template in .github/PULL_REQUEST_TEMPLATE.md..."
Time to productivity with ULPI: 3 days

Sarah’s feedback: “I didn’t have to bother the team with basic questions. The AI knew everything.”
🤖 AI-Assisted Coding (Generic → Project-Specific)
The Challenge:

AI assistants give generic advice that doesn’t match your project:
You: "Add error handling to this function"

AI (without ULPI): "Here's a generic try-catch:
try {
  // your code
} catch (err) {
  console.log(err); // ❌ Not your pattern
}"
With ULPI Documentation:
You: "Add error handling to this function"

AI (with ULPI):
[Queries ULPI: "error handling patterns"]
[Finds docs/conventions.md: "Use Winston logger, return {success, error}"]

"Based on your error handling conventions in docs/conventions.md:
try {
  // your code
} catch (error) {
  logger.error('Function failed', { error, context });
  return { success: false, error: 'Operation failed' };
}

This follows your standard pattern:
- Winston logger for all errors
- Structured logging with context
- {success, error} response format"
AI knows YOUR patterns. Code matches team standards from day one.
Time: 30-45 minutes per question

With ULPI Documentation:
Query: "Which services use PostgreSQL?"

ULPI searches all 47 repositories in 0.2 seconds:

Results:
- user-service: PostgreSQL for user data
- order-service: PostgreSQL for transactions
- analytics-service: Separate PostgreSQL instance
- backup-service: PostgreSQL dump scripts
- infrastructure: PostgreSQL k8s deployment

Also finds:
- Connection pool configurations
- Migration strategies
- Backup procedures
- Monitoring setup
Time: 0.2 seconds

Quote from a DevOps lead: “Finding infrastructure info went from 30 minutes to 30 seconds. Game-changer for our microservices.”
📚 Legacy Codebase Understanding
The Challenge:

You inherit a 5-year-old codebase:
Original developers gone
Documentation scattered and outdated
Critical knowledge in code comments
Architecture decisions lost to history
Questions you have:
“Why is the payment system designed this way?”
“What was the rationale for this database choice?”
“How does authentication actually work?”
Answers: Lost in 50,000 lines of code and 200 old PRs

With ULPI Documentation:
Query: "Why do we use Redis for sessions instead of database?"

ULPI finds:

1. Architecture Decision Record (ADR-012-session-storage.md):
   "Chose Redis over PostgreSQL sessions because:
   - 10x faster read performance
   - Automatic expiration (TTL)
   - Horizontal scaling easier
   - Session data is ephemeral anyway"

2. Performance analysis (docs/benchmarks.md):
   "Redis: 50,000 req/sec vs PostgreSQL: 5,000 req/sec"

3. Implementation notes in code comments:
   "// Redis session store - see ADR-012 for rationale"
Historical context recovered. Understand why, not just what.

Enable code-comment indexing to capture this kind of in-code rationale.
The Challenge:

SOC 2 audit next week. The auditor asks:
“Show me your data retention policy”
“Where’s your incident response procedure?”
“How do you handle PII data?”
“What’s your access control documentation?”
Your documentation is scattered:
Security wiki (last updated: 2021)
GDPR compliance doc (in legal repo)
PII handling (in backend README)
Access control (in IAM configuration comments)
Incident response (Notion doc? Confluence?)
Time to find everything: 3-4 hours

With ULPI Documentation:
Query: "Show all documentation related to PII data handling"

ULPI finds across all repositories:
- security/GDPR-compliance.md: Data retention schedules
- backend/docs/data-privacy.md: PII encryption standards
- infrastructure/access-control.md: Who can access PII
- legal/privacy-policy.md: User data rights
- backend/models/user.ts comments: "// PII fields encrypted at rest"

Export all results → PDF for the auditor
Time to prepare audit documentation: 10 minutes

Bonus: All docs include source links, so the auditor can verify they’re real and current.
⚡ Emergency Troubleshooting (Production Down)
The Challenge:

3 AM. Production is down. Error: ECONNREFUSED Redis connection failed

Frantic search:
“Where’s the Redis configuration?”
“What’s the failover procedure?”
“Who has access to Redis dashboard?”
Searching GitHub while systems are down…
Time pressure: Every minute = lost revenue

With ULPI Documentation:
Query: "Redis connection failed - troubleshooting steps"

ULPI instantly finds (in 0.3 seconds):

1. infrastructure/runbooks/redis-troubleshooting.md:
   "ECONNREFUSED errors:
   1. Check Redis status: kubectl get pods -n redis
   2. Verify connection string in K8s secrets
   3. Failover to replica: helm upgrade --set redis.primary=redis-replica
   4. Contact: DevOps on-call (see on-call.md)"

2. infrastructure/redis-config.yaml: Connection details, failover settings

3. docs/on-call.md: On-call engineer contact info
Resolution time: 5 minutes instead of 45 minutes

Quote from an SRE: “ULPI runbook search saved us during a 3 AM outage. Found the failover procedure in seconds.”
1. Exclusion file (in repository root):

# Exclude generated docs
/docs/api/generated/

# Exclude drafts
**/drafts/**
*.draft.md

# Exclude specific files
CHANGELOG.md
/docs/archive/

# Exclude by pattern
*.bak
*-old.md
2. Dashboard settings:
Configure patterns via web UI
Apply to specific repositories
Global exclusions (all repos)
3. Default exclusions (automatic):
node_modules/, vendor/
.git/, .github/workflows/
Build directories: dist/, build/, out/
Binary files
Files >1MB
Use cases:
Exclude auto-generated API docs
Skip draft documentation
Ignore archived content
Prevent indexing sensitive files (shouldn’t be in repo anyway!)
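Glob-style exclusion like the file above can be approximated with the standard library. This is a simplified, hypothetical stand-in for ULPI's matcher, and note that `fnmatch`'s `*` crosses directory separators, so it is looser than gitignore-style matching:

```python
import fnmatch

# Patterns mirroring the exclusion-file example above.
EXCLUDE = [
    "/docs/api/generated/*",   # generated docs
    "**/drafts/**",            # drafts anywhere in the tree
    "*.draft.md",
    "CHANGELOG.md",
    "*.bak",
    "*-old.md",
]

def is_excluded(path: str) -> bool:
    """Return True if any pattern matches the repo-relative path.

    Paths are also tested with a leading "/" so root-anchored
    patterns like "/docs/api/generated/*" work.
    """
    return any(fnmatch.fnmatchcase(path, pat) or
               fnmatch.fnmatchcase("/" + path, pat)
               for pat in EXCLUDE)
```

A production matcher would use full gitignore semantics (anchoring, `**` handling, directory-only patterns); this sketch only shows the shape of the check.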