Hook Performance & Optimization
Every millisecond matters in developer experience. ULPI Hooks are designed for minimal latency impact. This guide explains hook performance characteristics, optimization strategies, and how to tune hooks for your workflow.

Performance Overview
Total Hook Overhead
Across all 8 hooks:

- Average latency per hook: ~140ms
- Total MCP calls: 14 (distributed across hooks)
- Network requests: 14 (one per MCP call)
- Perceived user impact: Minimal to none
Hook-by-Hook Performance
session-start: 150ms average
Latency Breakdown

Total: ~200ms sequential, ~150ms perceived (MCP requests run in parallel)
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Register agent (MCP) | 50ms | Network |
| Fetch inbox (MCP) | 40ms | Network |
| List reservations (MCP) | 35ms | Network |
| Search memories (MCP) | 60ms | Network |
| Render dashboard | 10ms | Local |
| Total | 200ms | - |
pre-tool-use:edit: 120ms average (Fastest Hook)
Latency Breakdown

Total: ~120ms
With cache hit: ~10ms (~92% faster; see the cache sketch after the table)
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Check cache (file reservation) | 2ms | Local |
| List reservations (MCP) | 60ms | Network (if cache miss) |
| Create reservation (MCP) | 50ms | Network (if creating) |
| Decision logic | 3ms | Local |
| Total | 120ms | - |
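The ~10ms cache-hit path comes from skipping the network round trip entirely. Below is a minimal sketch of a TTL-based reservation cache, assuming a hypothetical listReservations MCP call; the names and types are illustrative, not the actual ULPI API:

```typescript
// Hypothetical TTL cache for file reservations; names are illustrative.
type Reservation = { file: string; agentId: string; expiresAt: number };

const CACHE_TTL_MS = 30_000; // 30s default; tune per team size (see Best Practices)
const cache = new Map<string, { value: Reservation[]; fetchedAt: number }>();

async function getReservations(
  file: string,
  listReservations: (file: string) => Promise<Reservation[]>, // MCP call, ~60ms
): Promise<Reservation[]> {
  const hit = cache.get(file);
  if (hit && Date.now() - hit.fetchedAt < CACHE_TTL_MS) {
    return hit.value; // cache hit: ~2ms local lookup, no network round trip
  }
  const fresh = await listReservations(file); // cache miss: pay the ~60ms MCP call
  cache.set(file, { value: fresh, fetchedAt: Date.now() });
  return fresh;
}
```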
post-tool-use:edit: 80ms average (Non-Blocking)
Latency Breakdown

Total: ~80ms of background work (the table shows the sequential sum; the async calls overlap)
Perceived: 0ms (runs in the background; see the sketch after the table)
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Get action items (MCP) | 50ms | Network (async) |
| Update task status (MCP) | 30ms | Network (async) |
| Log audit event | 10ms | Local (async) |
| Total | 95ms | Async |
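The 0ms perceived latency comes from never awaiting the MCP calls on the editing path. A rough sketch of that fire-and-forget pattern, with an assumed MCP client interface (illustrative names, not the actual ULPI API):

```typescript
// Hypothetical MCP client surface for the post-edit hook.
interface McpClient {
  getActionItems(file: string): Promise<unknown>;   // ~50ms network call
  updateTaskStatus(file: string): Promise<unknown>; // ~30ms network call
}

function onPostEdit(mcp: McpClient, file: string): void {
  // Fire-and-forget: the edit returns immediately (perceived 0ms)
  // while the background calls finish on their own (~80ms total).
  void Promise.allSettled([
    mcp.getActionItems(file),
    mcp.updateTaskStatus(file),
  ]);
}
```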
pre-compact: 200ms average (Slowest Hook)
Latency Breakdown

Total: ~200ms
Why slower: AI-powered context analysis takes extra processing time
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Analyze conversation context | 80ms | AI processing |
| Extract important memories | 40ms | AI processing |
| Store memories bulk (MCP) | 70ms | Network |
| Decision logic | 5ms | Local |
| Total | 200ms | - |
user-prompt-submit: 100ms average
Latency Breakdown

Total: ~100ms
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Fetch inbox (urgent only) (MCP) | 50ms | Network |
| Get action items (MCP) | 40ms | Network |
| Decision logic | 5ms | Local |
| Total | 100ms | - |
stop: 90ms average
Latency Breakdown

Total: ~90ms
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Get action items (blocking only) (MCP) | 60ms | Network |
| Decision logic | 25ms | Local |
| Total | 90ms | - |
session-end: 110ms average (Non-Blocking)
Latency Breakdown

Total: ~110ms of background work (the table shows the sequential sum; the async calls overlap)
Perceived: 0ms (runs as the session closes)
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Release reservations (MCP) | 60ms | Network (async) |
| Update agent status (MCP) | 35ms | Network (async) |
| Store snapshot (MCP) | 50ms | Network (async) |
| Total | 150ms | Async |
subagent-stop: 95ms average (Non-Blocking)
Latency Breakdown

Total: ~95ms of background work (the table shows the sequential sum; the async calls overlap)
Perceived: 0ms (runs as the subagent stops)
| Operation | Time | Type |
|---|---|---|
| Hook trigger | 5ms | Local |
| Store learnings (MCP) | 50ms | Network (async) |
| Release reservations (MCP) | 30ms | Network (async) |
| Update task (MCP) | 20ms | Network (async) |
| Total | 105ms | Async |
Performance Summary Table
| Hook | Avg Latency | Blocking? | Frequency | User Impact | Optimization Potential |
|---|---|---|---|---|---|
| session-start | 150ms | ✅ Yes | Low | Low | High (⬇️ 67%) |
| pre-edit | 120ms | ✅ Yes | High | None | Medium (⬇️ 92% with cache) |
| post-edit | 80ms | ❌ No | High | None | High (⬇️ 90%) |
| pre-compact | 200ms | ✅ Yes | Very Low | Very Low | Medium (⬇️ 35%) |
| user-prompt | 100ms | ✅ Yes | High | Low | High (⬇️ 70%) |
| stop | 90ms | ✅ Yes | Low | None | Low (safety-critical) |
| session-end | 110ms | ❌ No | Low | None | Medium (⬇️ 45%) |
| subagent-stop | 95ms | ❌ No | Medium | None | Medium (⬇️ 57%) |
- Fastest: pre-edit (120ms, but 10ms with cache)
- Slowest: pre-compact (200ms, but fires rarely)
- Most frequent: pre-edit, user-prompt (every edit/prompt)
- Most optimizable: session-start (67%), post-edit (90%), user-prompt (70%)
Optimization Strategies
Strategy 1: Aggressive Caching
Use case: Rapid development with frequent edits (a configuration sketch follows this list)

- pre-edit: 120ms → 10ms (92% faster for cached files)
- session-start: 150ms → 90ms (40% faster for cached memories)
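In practice this strategy is mostly two knobs: turn caching on and give it a longer TTL. A sketch of what that might look like as a settings object; the key names are assumptions, not the actual ULPI configuration schema:

```typescript
// Illustrative settings object; key names are assumptions, not the ULPI schema.
const aggressiveCaching = {
  preEdit: {
    cacheReservations: true,
    cacheTtlMs: 30_000, // longer TTL means more ~10ms cache hits
  },
  sessionStart: {
    cacheMemories: true, // reuse recent memory-search results: ~150ms -> ~90ms
  },
};
```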
Strategy 2: Minimal Features
Use case: Solo developer, no team coordination needed

- session-start: 150ms → 50ms (67% faster)
- post-edit: 80ms → 10ms (87% faster)
- user-prompt: 100ms → 30ms (70% faster)
- session-end: 110ms → 60ms (45% faster)
Strategy 3: Memory-Focused
Use case: Complex project, context preservation critical

- pre-compact: 200ms → 250ms (25% slower, but saves more)
- session-start: 150ms → 200ms (33% slower, but loads more)
Strategy 4: Team Coordination Priority
Use case: Multi-agent team, zero conflicts critical (an illustrative preset sketch follows this list)

- pre-edit: Lower cache hit rate, but fresher reservation data
- user-prompt: Always shows urgent messages
- session-start: Full dashboard for team awareness
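Strategies 2 through 4 are largely feature toggles around the same settings surface. One way to think about them is as named presets; the keys below are illustrative only, not the actual ULPI configuration schema:

```typescript
// Illustrative presets; the real configuration surface may differ.
const presets = {
  minimalFeatures: {                 // Strategy 2: solo developer
    sessionStart: { dashboard: "compact" },
    postEdit: { taskSync: false },
    userPrompt: { inboxCheck: false },
  },
  memoryFocused: {                   // Strategy 3: trade latency for context
    preCompact: { memoryExtraction: "thorough" },
    sessionStart: { memorySearchLimit: 50 },
  },
  teamCoordination: {                // Strategy 4: freshness over cache hits
    preEdit: { cacheTtlMs: 10_000 },
    userPrompt: { inboxCheck: true },
  },
};
```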
Network Optimization
Reduce API Calls
Batch MCP tool calls when possible (an illustrative sketch follows):

- session-start: 5 API calls → 2 batched calls (60% fewer requests)
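Whether calls can be batched depends on the MCP server; the sketch below assumes a hypothetical batch endpoint and illustrative tool names to show the shape of the optimization:

```typescript
// Hypothetical batching: collapse independent session-start reads into one request.
interface BatchMcpClient {
  registerAgent(name: string): Promise<unknown>;
  // Assumed batch endpoint; the real ULPI MCP surface may not expose this.
  batch(calls: { tool: string; args: Record<string, unknown> }[]): Promise<unknown[]>;
}

async function sessionStart(mcp: BatchMcpClient, agentName: string) {
  // Call 1: registration (has side effects, so it stays separate).
  await mcp.registerAgent(agentName);

  // Call 2: all read-only lookups in a single batched request,
  // instead of separate inbox, reservation, and memory calls.
  const [inbox, reservations, memories] = await mcp.batch([
    { tool: "fetch_inbox", args: {} },
    { tool: "list_reservations", args: {} },
    { tool: "search_memories", args: { query: "recent" } },
  ]);
  return { inbox, reservations, memories };
}
```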
Use Regional API Endpoints
Connect to the closest ULPI API region:

- api-us-west.ulpi.io (Oregon)
- api-us-east.ulpi.io (Virginia)
- api-eu-west.ulpi.io (Ireland)
- api-ap-south.ulpi.io (Singapore)
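Endpoint choice can be a one-time latency probe at startup. A small sketch using the endpoints above; the probe method is illustrative, and any lightweight request to the API works:

```typescript
// Pick the regional endpoint with the lowest measured round-trip time.
const REGIONS = [
  "https://api-us-west.ulpi.io",
  "https://api-us-east.ulpi.io",
  "https://api-eu-west.ulpi.io",
  "https://api-ap-south.ulpi.io",
];

async function probe(url: string): Promise<number> {
  const start = performance.now();
  try {
    await fetch(url, { method: "HEAD" });
  } catch {
    return Number.POSITIVE_INFINITY; // unreachable regions sort last
  }
  return performance.now() - start;
}

async function pickClosestRegion(): Promise<string> {
  const latencies = await Promise.all(REGIONS.map(probe));
  return REGIONS[latencies.indexOf(Math.min(...latencies))];
}
```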
Performance Monitoring
Track Hook Execution
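One way to track execution time is to wrap each hook handler with a timer. A minimal sketch, assuming hooks are implemented as async functions; the wrapper and handler signature are assumptions, not part of ULPI:

```typescript
// Hypothetical timing wrapper around a hook handler.
type HookHandler = (input: unknown) => Promise<unknown>;

const timings: Record<string, number[]> = {};

function withTiming(name: string, handler: HookHandler): HookHandler {
  return async (input) => {
    const start = performance.now();
    try {
      return await handler(input);
    } finally {
      const elapsed = performance.now() - start;
      (timings[name] ??= []).push(elapsed);
      console.error(`[hook-perf] ${name}: ${elapsed.toFixed(1)}ms`);
    }
  };
}
```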
Enable Performance Logging
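Persisting those measurements makes the weekly trend review straightforward. A sketch that appends one JSON line per hook execution to a local file; the path and record shape are illustrative:

```typescript
import { appendFileSync } from "node:fs";

// Append one JSON line per hook execution; path and fields are illustrative.
const LOG_PATH = "/tmp/ulpi-hook-perf.log";

function logHookTiming(hook: string, elapsedMs: number, cacheHit: boolean): void {
  const record = {
    ts: new Date().toISOString(),
    hook, // e.g. "pre-tool-use:edit"
    elapsedMs: Math.round(elapsedMs),
    cacheHit, // useful for spotting a falling cache hit rate
  };
  appendFileSync(LOG_PATH, JSON.stringify(record) + "\n");
}
```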
Troubleshooting Slow Hooks
Diagnosis Steps
1. Identify the slow hook (a log-summary sketch follows these steps)
2. Enable debug logging
3. Check network latency
4. Review the hook configuration
5. Apply optimizations

Based on the findings, apply the relevant optimization strategy.
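For step 1, the performance log from the monitoring section above can be summarized per hook to find the outlier. A sketch assuming the JSON-lines format from the earlier logging example:

```typescript
import { readFileSync } from "node:fs";

// Average latency per hook from the JSON-lines perf log (illustrative format).
function summarize(logPath = "/tmp/ulpi-hook-perf.log"): void {
  const byHook = new Map<string, number[]>();
  for (const line of readFileSync(logPath, "utf8").split("\n")) {
    if (!line.trim()) continue;
    const { hook, elapsedMs } = JSON.parse(line) as { hook: string; elapsedMs: number };
    const samples = byHook.get(hook) ?? [];
    samples.push(elapsedMs);
    byHook.set(hook, samples);
  }
  for (const [hook, samples] of byHook) {
    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    console.log(`${hook}: avg ${avg.toFixed(0)}ms over ${samples.length} runs`);
  }
}
```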
Best Practices
Profile before optimizing

Recommendation: Run a performance analysis before making changes.
Benefit: Know exactly what improved and by how much.
Optimize based on usage patterns

Recommendation: Different workflows call for different optimizations.

- Solo developer: Aggressive caching, minimal features
- Team coordination: Fresh data, full features
- Long-term project: Memory-focused, context preservation
Monitor performance over time

Recommendation: Track performance trends weekly. Watch for:
- Increasing latency (API degradation)
- Decreasing cache hit rate (changing patterns)
- Slow hooks (network issues)
Use an appropriate cache TTL

Recommendation: Match the cache duration to your team size (a small helper sketch follows this list):
- Solo: 60s (very stable)
- 2-3 people: 30s (mostly stable)
- 4-6 people: 15s (moderately dynamic)
- 7+ people: 10s (very dynamic, default)
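Expressed as a tiny helper, the recommendation above might look like this; the thresholds come from the list, while the function itself is illustrative:

```typescript
// Map team size to a cache TTL, following the guidance above.
function cacheTtlMs(teamSize: number): number {
  if (teamSize <= 1) return 60_000; // solo: very stable
  if (teamSize <= 3) return 30_000; // 2-3 people: mostly stable
  if (teamSize <= 6) return 15_000; // 4-6 people: moderately dynamic
  return 10_000;                    // 7+ people: very dynamic (default)
}
```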
Success Metrics
Teams optimizing hooks report:

94% Cache Hit Rate
For pre-edit hooks with a 30s cache TTL. Rapid file editing becomes nearly instant.

Average 85ms Latency
Across all hooks (vs. the 140ms default). About 40% faster with optimization.

Sub-100ms P95 Latency
95% of hooks complete in under 100ms. A consistently fast experience.

Zero User Complaints
“We never notice the hooks are there.” Imperceptible in daily workflows.