
Hook Performance & Optimization

Every millisecond matters in developer experience. ULPI Hooks are designed for minimal latency impact. This guide explains hook performance characteristics, optimization strategies, and how to tune hooks for your workflow.

Performance Overview

Total Hook Overhead

Across all 8 hooks:
  • Average latency per hook: ~140ms
  • Total MCP calls: 14 (distributed across hooks)
  • Network requests: 14 (one per MCP call)
  • Perceived user impact: Minimal to none
Key insight: Most hooks are non-blocking or fire at times when latency doesn’t affect user experience.
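The non-blocking pattern amounts to fire-and-forget scheduling: the hook body runs as a background task while the user's action returns immediately. A minimal Python sketch, assuming illustrative hook names and latencies (not the actual ULPI implementation):

```python
import asyncio
import time

async def mcp_call(name: str, latency_s: float) -> str:
    # Stand-in for a network MCP call; latencies are illustrative.
    await asyncio.sleep(latency_s)
    return f"{name}: ok"

async def post_edit_hook() -> None:
    # Non-blocking hook body: runs entirely in the background.
    await mcp_call("get_action_items", 0.05)
    await mcp_call("update_task_status", 0.03)

async def handle_edit() -> float:
    start = time.perf_counter()
    # Fire-and-forget: schedule the hook, return to the user immediately.
    task = asyncio.create_task(post_edit_hook())
    blocked_for = time.perf_counter() - start
    await task  # awaited only so this demo exits cleanly
    return blocked_for

blocked = asyncio.run(handle_edit())
print(f"user-visible blocking: {blocked * 1000:.2f}ms")
```

The hook still takes ~80ms of wall time, but none of it sits between the user's edit and the next action.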

Hook-by-Hook Performance

session-start: 150ms average

Total: ~150ms

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Register agent (MCP) | 50ms | Network |
| Fetch inbox (MCP) | 40ms | Network |
| List reservations (MCP) | 35ms | Network |
| Search memories (MCP) | 60ms | Network |
| Render dashboard | 10ms | Local |
| Total | 200ms | - |

Actual perceived: ~150ms (the MCP requests run in parallel)
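The gap between the 200ms column total and the ~150ms perceived latency comes from issuing the MCP calls concurrently: wall-clock time approaches the slowest call, not the sum. A sketch with stubbed latencies taken from the table above (not the real client):

```python
import asyncio
import time

LATENCIES_MS = {  # per-call network latencies from the table above
    "register_agent": 50,
    "fetch_inbox": 40,
    "list_reservations": 35,
    "search_memories": 60,
}

async def mcp_call(name: str) -> str:
    await asyncio.sleep(LATENCIES_MS[name] / 1000)
    return name

async def session_start() -> float:
    start = time.perf_counter()
    # Issue all four MCP calls at once instead of sequentially.
    await asyncio.gather(*(mcp_call(n) for n in LATENCIES_MS))
    return (time.perf_counter() - start) * 1000

elapsed = asyncio.run(session_start())
# Wall time is close to the slowest call (60ms), not the 185ms sum.
print(f"{elapsed:.0f}ms")
```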

pre-tool-use:edit: 120ms average (Fastest Hook)

Total: ~120ms

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Check cache (file reservation) | 2ms | Local |
| List reservations (MCP) | 60ms | Network (if cache miss) |
| Create reservation (MCP) | 50ms | Network (if creating) |
| Decision logic | 3ms | Local |
| Total | 120ms | - |

With cache hit: ~10ms (~92% faster)
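The cache-hit path can be sketched with a small TTL cache keyed by file path: on a hit, the ~60ms list-reservations round trip is skipped entirely. This is a hypothetical sketch, since the hook's actual cache internals aren't documented here:

```python
import time

class TTLCache:
    """Minimal TTL cache, sketching the pre-edit reservation check."""
    def __init__(self, ttl_s: float = 10.0):  # 10s default per this guide
        self.ttl_s = ttl_s
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None                      # cache miss
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl_s:
            del self._store[key]             # expired: treat as a miss
            return None
        return value                         # cache hit: the ~2ms path

    def put(self, key: str, value) -> None:
        self._store[key] = (time.monotonic(), value)

def check_reservation(cache: TTLCache, path: str, fetch):
    cached = cache.get(path)
    if cached is not None:
        return cached          # skip the ~60ms network round trip
    fresh = fetch(path)        # list-reservations MCP call (cache miss)
    cache.put(path, fresh)
    return fresh
```

Repeated edits to the same file within the TTL hit the local path, which is where the ~10ms figure comes from.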

post-tool-use:edit: 80ms average (Non-Blocking)

Total: ~80ms (async)

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Get action items (MCP) | 50ms | Network (async) |
| Update task status (MCP) | 30ms | Network (async) |
| Log audit event | 10ms | Local (async) |
| Total | 95ms | Async |

Perceived: 0ms (runs in background)

pre-compact: 200ms average (Slowest Hook)

Total: ~200ms

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Analyze conversation context | 80ms | AI processing |
| Extract important memories | 40ms | AI processing |
| Store memories bulk (MCP) | 70ms | Network |
| Decision logic | 5ms | Local |
| Total | 200ms | - |

Why slower: the context-analysis and memory-extraction steps are AI-powered, adding processing time on top of the network calls

user-prompt-submit: 100ms average

Total: ~100ms

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Fetch inbox (urgent only) (MCP) | 50ms | Network |
| Get action items (MCP) | 40ms | Network |
| Decision logic | 5ms | Local |
| Total | 100ms | - |

stop: 90ms average

Total: ~90ms

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Get action items (blocking only) (MCP) | 60ms | Network |
| Decision logic | 25ms | Local |
| Total | 90ms | - |

session-end: 110ms average (Non-Blocking)

Total: ~110ms (async)

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Release reservations (MCP) | 60ms | Network (async) |
| Update agent status (MCP) | 35ms | Network (async) |
| Store snapshot (MCP) | 50ms | Network (async) |
| Total | 150ms | Async |

Perceived: 0ms (runs as the session closes)

subagent-stop: 95ms average (Non-Blocking)

Total: ~95ms (async)

| Operation | Time | Type |
| --- | --- | --- |
| Hook trigger | 5ms | Local |
| Store learnings (MCP) | 50ms | Network (async) |
| Release reservations (MCP) | 30ms | Network (async) |
| Update task (MCP) | 20ms | Network (async) |
| Total | 105ms | Async |

Perceived: 0ms (runs as the subagent stops)

Performance Summary Table

| Hook | Avg Latency | Blocking? | Frequency | User Impact | Optimization Potential |
| --- | --- | --- | --- | --- | --- |
| session-start | 150ms | ✅ Yes | Low | Low | High (⬇️ 67%) |
| pre-edit | 120ms | ✅ Yes | High | None | Medium (⬇️ 92% with cache) |
| post-edit | 80ms | ❌ No | High | None | High (⬇️ 87%) |
| pre-compact | 200ms | ✅ Yes | Very Low | Very Low | Medium (⬇️ 35%) |
| user-prompt | 100ms | ✅ Yes | High | Low | High (⬇️ 70%) |
| stop | 90ms | ✅ Yes | Low | None | Low (safety-critical) |
| session-end | 110ms | ❌ No | Low | None | Medium (⬇️ 45%) |
| subagent-stop | 95ms | ❌ No | Medium | None | Medium (⬇️ 57%) |
Key Insights:
  • Fastest: pre-edit (120ms, but 10ms with cache)
  • Slowest: pre-compact (200ms, but fires rarely)
  • Most frequent: pre-edit, user-prompt (every edit/prompt)
  • Most optimizable: session-start (67%), post-edit (90%), user-prompt (70%)

Optimization Strategies

Strategy 1: Aggressive Caching

Use case: Rapid development with frequent edits
{
  "caching": {
    "enabled": true,
    "reservationCacheTTL": 30,      // 30 seconds (default: 10)
    "memoryCacheTTL": 300,           // 5 minutes (default: 60)
    "agentStatusCacheTTL": 60        // 1 minute (default: 30)
  }
}
Impact:
  • pre-edit: 120ms → 10ms (92% faster for cached files)
  • session-start: 150ms → 90ms (40% faster for cached memories)
Tradeoff: Slightly stale data (max 30s old)
Recommendation: Use for active development on stable teams

Strategy 2: Minimal Features

Use case: Solo developer, no team coordination needed
{
  "hooks": {
    "sessionStart": {
      "showDashboard": false,
      "loadMemories": false
    },
    "postEdit": {
      "showObligations": false,
      "updateTasks": false
    },
    "userPrompt": {
      "showPendingAcks": false
    },
    "sessionEnd": {
      "createSnapshot": false
    }
  }
}
Impact:
  • session-start: 150ms → 50ms (67% faster)
  • post-edit: 80ms → 10ms (87% faster)
  • user-prompt: 100ms → 30ms (70% faster)
  • session-end: 110ms → 60ms (45% faster)
Tradeoff: Lose team coordination features, memory, and obligations
Recommendation: Only for solo projects with no team

Strategy 3: Memory-Focused

Use case: Complex project, context preservation critical
{
  "hooks": {
    "preCompact": {
      "minImportance": 0.4,          // Save more memories
      "includeCode": true,
      "maxMemoriesPerSnapshot": 150
    },
    "sessionStart": {
      "loadMemories": true,
      "minMemoryImportance": 0.4     // Load more memories
    },
    "sessionEnd": {
      "createSnapshot": true         // Always snapshot
    }
  }
}
Impact:
  • pre-compact: 200ms → 250ms (25% slower, but saves more)
  • session-start: 150ms → 200ms (33% slower, but loads more)
Tradeoff: Higher latency, but better context preservation
Recommendation: Use for long-term projects with complex architecture

Strategy 4: Team Coordination Priority

Use case: Multi-agent team, zero conflicts critical
{
  "hooks": {
    "preEdit": {
      "blockConflicts": true,
      "showGuidance": true,
      "cacheDuration": 5             // Shorter cache for real-time updates
    },
    "userPrompt": {
      "showUrgentMessages": true,
      "requireAck": true
    },
    "sessionStart": {
      "showDashboard": true
    }
  }
}
Impact:
  • pre-edit: Cache hit rate decreases, but fresher data
  • user-prompt: Always shows urgent messages
  • session-start: Full dashboard for team awareness
Tradeoff: Slightly higher latency, but better coordination
Recommendation: Use for active team development

Network Optimization

Reduce API Calls

Batch MCP tool calls when possible:
{
  "network": {
    "batchRequests": true,           // Batch multiple MCP calls
    "batchWindow": 50,                // Wait 50ms to batch calls
    "parallelRequests": 3             // Max 3 parallel requests
  }
}
Impact:
  • session-start: 5 API calls → 2 batched calls (60% fewer requests)
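Conceptually, the batch window holds each call for up to batchWindow milliseconds so that near-simultaneous calls share one network round trip. A hypothetical sketch, since the actual ULPI transport isn't documented here:

```python
import asyncio

class Batcher:
    """Collects MCP calls for a short window, then sends one batch."""
    def __init__(self, window_ms: float = 50, max_parallel: int = 3):
        self.window_s = window_ms / 1000
        self.sem = asyncio.Semaphore(max_parallel)  # cap parallel batches
        self._pending = []    # list of (method, future) awaiting a flush
        self._flusher = None  # the pending flush task, if any

    async def call(self, method: str):
        fut = asyncio.get_running_loop().create_future()
        self._pending.append((method, fut))
        if self._flusher is None:
            # First call in this window starts the flush timer.
            self._flusher = asyncio.create_task(self._flush_later())
        return await fut

    async def _flush_later(self):
        await asyncio.sleep(self.window_s)  # wait out the batch window
        batch, self._pending = self._pending, []
        self._flusher = None
        async with self.sem:
            results = await self._send_batch([m for m, _ in batch])
        for (_, fut), res in zip(batch, results):
            fut.set_result(res)

    async def _send_batch(self, methods):
        # One network round trip for the whole batch (stubbed here).
        await asyncio.sleep(0.05)
        return [f"{m}: ok" for m in methods]

async def main():
    b = Batcher()
    # Two calls issued in the same window share one request.
    return await asyncio.gather(b.call("fetch_inbox"),
                                b.call("list_reservations"))

results = asyncio.run(main())
```

The tradeoff is explicit: every batched call pays up to the window (50ms) in added latency in exchange for fewer round trips.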

Use Regional API Endpoints

Connect to closest ULPI API region:
{
  "apiUrl": "https://api-us-west.ulpi.io"  // Use regional endpoint
}
Regions:
  • api-us-west.ulpi.io (Oregon)
  • api-us-east.ulpi.io (Virginia)
  • api-eu-west.ulpi.io (Ireland)
  • api-ap-south.ulpi.io (Singapore)
Impact: 60ms → 30ms (50% faster API calls from regional proximity)
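Choosing a region can be automated by timing a probe against each endpoint and taking the fastest. A sketch using the endpoints listed above; the probe is injectable so the selection logic can be tested offline:

```python
import time
import urllib.request

REGIONS = [  # regional endpoints listed above
    "https://api-us-west.ulpi.io",
    "https://api-us-east.ulpi.io",
    "https://api-eu-west.ulpi.io",
    "https://api-ap-south.ulpi.io",
]

def measure_latency_ms(url: str, probe=None) -> float:
    """Time one round trip to `url`; unreachable regions rank last."""
    probe = probe or (lambda u: urllib.request.urlopen(u, timeout=5))
    start = time.perf_counter()
    try:
        probe(url)
    except OSError:
        return float("inf")
    return (time.perf_counter() - start) * 1000

def fastest_region(latencies: dict) -> str:
    """Pick the endpoint with the lowest measured latency."""
    return min(latencies, key=latencies.get)
```

In practice you would run `measure_latency_ms` once per region at startup and set `apiUrl` to the winner.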

Performance Monitoring

Track Hook Execution

# View hook performance stats
ulpi hooks performance

# Output:
🎯 Hook Performance (Last 24 Hours)

Hook Statistics:
├─ session-start (12 executions)
│  ├─ Avg: 145ms
│  ├─ P50: 140ms
│  ├─ P95: 180ms
│  └─ P99: 220ms

├─ pre-edit (1,247 executions)
│  ├─ Avg: 18ms (94% cache hits)
│  ├─ P50: 10ms
│  ├─ P95: 120ms
│  └─ P99: 150ms

├─ pre-compact (3 executions)
│  ├─ Avg: 195ms
│  ├─ P50: 190ms
│  ├─ P95: 210ms
│  └─ P99: 220ms

└─ user-prompt (452 executions)
   ├─ Avg: 102ms
   ├─ P50: 95ms
   ├─ P95: 130ms
   └─ P99: 160ms

Network Efficiency:
- Total API calls: 2,156
- Cached responses: 1,103 (51%)
- Average latency: 58ms
- P95 latency: 95ms

Recommendations:
✓ Cache hit rate excellent (51%)
⚠️  Consider increasing pre-compact minImportance (3 snapshots in 24h)
✓ pre-edit cache performing well (94% hits)
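The P50/P95/P99 figures above are plain latency percentiles, and a nearest-rank implementation is enough to reproduce them from raw samples. A sketch (not the CLI's actual implementation):

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample with at least
    p% of the data at or below it (how P50/P95/P99 are read)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# A pre-edit-like distribution: mostly cache hits, a few misses.
latencies = [10.0] * 94 + [120.0] * 5 + [150.0]
p50 = percentile(latencies, 50)   # 10.0 (the cache-hit path)
p95 = percentile(latencies, 95)   # 120.0 (cache-miss path)
```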

Enable Performance Logging

{
  "debug": true,
  "performance": {
    "logSlowHooks": true,          // Log hooks > 200ms
    "logAllHooks": false,           // Log every hook execution
    "slowThreshold": 200            // Define "slow" as > 200ms
  }
}
Logs:
[2024-01-15T10:30:45Z] Hook: pre-compact, Latency: 245ms ⚠️  SLOW
[2024-01-15T10:30:45Z] └─ Breakdown: analyze(95ms), extract(55ms), store(85ms), other(10ms)
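The logSlowHooks behavior amounts to timing each hook and emitting a warning past the threshold. A decorator sketch, assuming the threshold from the config above and a stubbed hook body (not the actual implementation):

```python
import functools
import logging
import time

logging.basicConfig(format="%(message)s")
SLOW_THRESHOLD_MS = 200           # mirrors performance.slowThreshold
LATENCIES: dict = {}              # last observed latency per hook

def log_slow_hooks(hook_name: str):
    """Time a hook and warn when it exceeds the slow threshold."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            latency_ms = (time.perf_counter() - start) * 1000
            LATENCIES[hook_name] = latency_ms
            if latency_ms > SLOW_THRESHOLD_MS:
                logging.warning("Hook: %s, Latency: %.0fms SLOW",
                                hook_name, latency_ms)
            return result
        return inner
    return wrap

@log_slow_hooks("pre-compact")
def pre_compact() -> str:
    time.sleep(0.25)   # stand-in for the AI-analysis step
    return "done"
```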

Troubleshooting Slow Hooks

Diagnosis Steps

1. Identify Slow Hook

ulpi hooks performance --slow-only

Find which hooks exceed the threshold.

2. Enable Debug Logging

ulpi config set debug=true
ulpi config set performance.logSlowHooks=true

3. Check Network Latency

ulpi network ping

Output:
Pinging ULPI API (https://api.ulpi.io)...
├─ Latency: 85ms
├─ Region: us-west
├─ Suggested: api-us-east.ulpi.io (45ms) ✓

Switch to the suggested region if a faster one is reported.

4. Review Hook Configuration

ulpi config list --hook session-start

Check whether any unnecessary features are enabled.

5. Apply Optimizations

Based on your findings, apply the relevant optimization strategy.

Best Practices

Recommendation: Run performance analysis before making changes
# Baseline performance
ulpi hooks performance --baseline

# Make configuration changes

# Compare after
ulpi hooks performance --compare-to-baseline
Benefit: Know exactly what improved and by how much
Recommendation: Different optimizations for different workflows
  • Solo developer: Aggressive caching, minimal features
  • Team coordination: Fresh data, full features
  • Long-term project: Memory-focused, context preservation
Recommendation: Track performance trends weekly
ulpi hooks performance --weekly-report
Watch for:
  • Increasing latency (API degradation)
  • Decreasing cache hit rate (changing patterns)
  • Slow hooks (network issues)
Recommendation: Match cache duration to team size
  • Solo: 60s (very stable)
  • 2-3 people: 30s (mostly stable)
  • 4-6 people: 15s (moderately dynamic)
  • 7+ people: 10s (very dynamic, default)
Rationale: Larger teams change state more frequently
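The team-size guidance above is a simple step function, which could be encoded as follows (a sketch of the guidance, not a documented ULPI setting):

```python
def reservation_cache_ttl(team_size: int) -> int:
    """Map team size to a suggested reservation-cache TTL in
    seconds, following the table above."""
    if team_size <= 1:
        return 60   # solo: very stable state
    if team_size <= 3:
        return 30   # mostly stable
    if team_size <= 6:
        return 15   # moderately dynamic
    return 10       # 7+: very dynamic (the default)
```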

Success Metrics

Teams optimizing hooks report:

94% Cache Hit Rate

For pre-edit hooks with a 30s cache TTL
Rapid file editing is nearly instant

Average 85ms Latency

Across all hooks (vs the 140ms default)
40% faster with optimization

Sub-100ms P95 Latency

95% of hook executions complete in under 100ms
A consistently fast experience

Zero User Complaints

“We never notice the hooks are there”
Imperceptible in daily workflows
