Common Workflows

Practical examples from actual customers. Learn how teams use Memory Module for knowledge management, customer context, sales, research, and content creation.

Personal Knowledge Management

Goal: Build a second brain that learns what’s important to you.
1. Capture as You Learn

Store: "React Server Components eliminate client-side
data fetching by allowing async fetch on server.
Reduces bundle size and improves initial page load."

Tags: react, server-components, performance
Sector: semantic
The system automatically creates semantic understanding and links.
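
If you capture programmatically, here is a minimal sketch of the same store call, using the /api/v1/memories endpoint from the API Automation section below (the Authorization header is an assumed auth scheme, shown for illustration only):

// Minimal store helper around the /api/v1/memories endpoint
// (the Authorization header is an assumption; adjust to your credentials)
async function storeMemory(content, sector, tags) {
  const res = await fetch('https://api.ulpi.io/api/v1/memories', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.ULPI_API_KEY}` // assumed
    },
    body: JSON.stringify({ content, sector, tags })
  });
  return res.json();
}

await storeMemory(
  'React Server Components eliminate client-side data fetching by allowing async fetch on server.',
  'semantic',
  ['react', 'server-components', 'performance']
);
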
2. Let Connections Form

Don’t manually organize! Waypoints automatically connect:
  • Server Components → Next.js App Router
  • Server Components → Bundle Optimization
  • Server Components → Hydration Issues
Search “React performance” → Get all connected context.
3. Reinforce Key Concepts

Reinforce the Server Components memory with the Deep Learning profile.
This slows decay, which is perfect for core concepts you’re studying.
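
In code, that reinforcement might look like the sketch below. It reuses the searchMemories and reinforceMemory helpers from the API Automation section; the 'deep-learning' profile identifier is an assumption based on the profile's display name.

// Reinforce core concepts so they decay more slowly
// ('deep-learning' profile id is assumed from the display name)
async function reinforceCoreConcepts() {
  const core = await searchMemories({ tags: ['server-components'] });
  for (const memory of core) {
    await reinforceMemory(memory.id, 'deep-learning');
  }
}
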
4. Let Temporary Notes Fade

Store: "Trying to figure out why useEffect runs twice..."
Sector: episodic
Once resolved, it fades naturally. No filing required!
Result: Knowledge base mirrors your learning—important concepts stay fresh, temporary details fade.

Software Development Team

Goal: Maintain team context about decisions, patterns, and “why we did it this way.” Typical uses:
  • Architectural Decisions
  • Code Patterns
  • Bug Investigations
  • Team Onboarding
Store Reflective: "We chose PostgreSQL over MongoDB because:
1. Strong ACID guarantees for financial data
2. Complex joins for reporting
3. Team SQL expertise
4. Mature BI ecosystem

Decision: 2025-01-15
Participants: Sarah, Marcus, Aisha
Alternative: MongoDB (rejected for lack of joins)"

Tags: architecture, database, postgresql, adr
Sector: reflective
Why Reflective: Strategic decisions with lasting impact (693-day half-life)
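
To keep decision records consistent, a small formatting helper can wrap the store call. This is a sketch reusing the storeMemory helper from the Personal Knowledge Management section; the field names are illustrative:

// Assemble an ADR-style record and store it in the reflective sector
function formatDecision({ choice, reasons, date, participants, alternative }) {
  return [
    `We chose ${choice} because:`,
    ...reasons.map((reason, i) => `${i + 1}. ${reason}`),
    '',
    `Decision: ${date}`,
    `Participants: ${participants.join(', ')}`,
    `Alternative: ${alternative}`
  ].join('\n');
}

await storeMemory(
  formatDecision({
    choice: 'PostgreSQL over MongoDB',
    reasons: ['Strong ACID guarantees for financial data', 'Complex joins for reporting'],
    date: '2025-01-15',
    participants: ['Sarah', 'Marcus', 'Aisha'],
    alternative: 'MongoDB (rejected for lack of joins)'
  }),
  'reflective',
  ['architecture', 'database', 'postgresql', 'adr']
);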

Customer Support Memory

Goal: Remember customer context across interactions.

Customer Preferences

Store Semantic: "Customer: Acme Corp
- Prefers email over phone (mentioned 3x)
- CEO Sarah responds faster on Slack
- Quarterly reviews Q2 and Q4
- Sensitive about pricing
- Timezone: PST (9am-5pm)"

Tags: acme-corp, customer-preferences
Sector: semantic

Support Interactions

Store Episodic: "Call with Acme - 2025-01-20
Issue: Rate limiting (429 errors)
Resolution: Upgraded Starter → Pro
Sentiment: Initially frustrated → happy
Follow-up: Check in after 1 week
Agent: Marcus"

Tags: acme-corp, support, upgrade
Sector: episodic

Emotional Context

Store Emotional: "Acme CEO very excited
about Memory Module during demo.
Mentioned this solves their 'context loss
problem' from months of struggle.
High enthusiasm for expansion."

Tags: acme-corp, positive-sentiment
Sector: emotional
Sentiment fades fast (35 days) but is valuable for immediate interactions.

Before Next Call

Agent searches “Acme Corp”. The AI returns:
  • Customer preferences (always accessible)
  • Recent interactions (last 3 months, ranked)
  • Positive sentiment from last call
  • Pro plan details and usage
  • Similar customers with rate issues (waypoints)
Result: Agent has complete context. Customer feels remembered.
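
Scripted, that pre-call lookup might look like this sketch. The API Automation section uses a searchMemories helper without defining it; the /search endpoint and its parameters here are assumptions, not a documented API:

// Hypothetical implementation of the searchMemories helper
// (the /search endpoint and its q/tags params are assumptions)
async function searchMemories({ q = '', tags = [] } = {}) {
  const params = new URLSearchParams({ q, tags: tags.join(',') });
  const res = await fetch(
    `https://api.ulpi.io/api/v1/memories/search?${params}`,
    { headers: { 'Authorization': `Bearer ${process.env.ULPI_API_KEY}` } } // assumed
  );
  return res.json();
}

// Pull full customer context before the call
const context = await searchMemories({ q: 'Acme Corp', tags: ['acme-corp'] });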

Sales & CRM Context

Goal: Personalized conversations with rich prospect context. Typical touchpoints:
  • Prospect Research
  • Sales Calls
  • Competitive Intelligence
  • Before Follow-Up
Store: "Prospect: TechStartup Inc
- Series A: $10M (Jan 2025)
- 25 engineers, planning 2x in 6 months
- Stack: Next.js, PostgreSQL, AWS
- Pain: 'drowning in context switching' (Twitter)
- Current: Notion + linear notes (frustrated)
- Decision maker: CTO David Kim (Twitter follower)"

Tags: prospect, techstartup-inc, series-a
Sector: semantic

Research & Academic Work

Goal: Build interconnected knowledge while studying complex topics.
1. Store Research Papers

Store: "Paper: 'Attention Is All You Need' (Vaswani et al., 2017)
- Transformer architecture replacing RNNs
- Self-attention mechanism allows parallel processing
- Positional encoding for sequence order
- Multi-head attention captures different relationships
- Foundation for BERT, GPT, modern LLMs

Citation: arXiv:1706.03762"

Tags: transformers, attention-mechanism, deep-learning, paper
Sector: semantic
2. Connections Form Automatically

Waypoints automatically connect related memories:
  • Attention mechanism → BERT pre-training
  • Transformers → GPT architecture
  • Parallel processing → Training efficiency
  • Positional encoding → Sequence modeling
Search “how do transformers handle sequence order?” → Positional encoding WITH related transformer concepts.
3. Study Notes with Spaced Repetition

Store: "Study: Understanding self-attention
- Q, K, V matrices project input embeddings
- Attention score = softmax(Q·K^T / √d_k)
- Higher scores = more relevant tokens
- Multiple heads capture different relationships
Confidence: Medium (need practice)"

Tags: transformers, self-attention, study-notes
Sector: semantic

Then: Reinforce with the Deep Learning profile weekly.
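
A weekly job can automate this; here is a sketch using the helpers above (the 'deep-learning' profile id remains an assumption):

// Run weekly (e.g., from cron) to reinforce all study notes
async function weeklyStudyReinforcement() {
  const notes = await searchMemories({ tags: ['study-notes'] });
  for (const memory of notes) {
    await reinforceMemory(memory.id, 'deep-learning');
  }
}
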
4. Ephemeral Exploration Fades

Store: "Trying to understand why √d_k scaling is needed...
Something about variance stabilization? Read appendix again."
Sector: episodic
Fades in ~46 days unless revisited. No cleanup needed!
Result: Core concepts persist and strengthen, exploration notes fade, connections form automatically.

Content Creation & Writing

Goal: Maintain idea continuity without drowning in notes.

Capturing Ideas

Store: "Article: 'Why Your AI Assistant
Needs Memory Like Your Brain'
Angle: Compare vector DBs vs cognitive memory
Hook: 'You remember your wedding, not yesterday's lunch'
Target: Developers building AI apps
Status: Idea stage"

Tags: article-ideas, ai-memory
Sector: episodic

Research & Quotes

Store: "Quote for AI memory article:
'The faintest ink is more powerful than the
strongest memory' - Chinese proverb
Context: Contrast traditional note-taking vs
cognitive memory (selective retention)

Source: Research notes, Jan 2025"

Tags: quotes, ai-memory-article
Sector: semantic

Draft Tracking

As you work on the article, references reinforce automatically:
  • Search “AI memory article” multiple times → Reinforcement increases
  • Related research appears through waypoints
  • Unused ideas fade naturally

Completed Work

After publishing:
Store: "Published: 'Why Your AI Assistant Needs Memory'
- Medium, 2025-02-01
- Performance: 1,200 views first week, 145 comments
- Top comment: Request for technical deep-dive
- Follow-up idea: 'Building Cognitive Memory Systems'"

Tags: published, ai-memory, portfolio
Sector: reflective
Ideas that gain traction (repeated access) stay strong. One-off thoughts fade. Published work persists.

Integration Patterns

Cross-Tool Memory

Same memory system, all tools, complete context everywhere:
  • Claude Desktop (morning): Store today’s priorities
  • Continue (VS Code): Search for PR context while coding
  • Cursor (different project): Search for Q1 planning
All tools access same memories instantly.

API Automation

// Auto-store context from GitHub pull requests
async function storePRContext(pr) {
  await fetch('https://api.ulpi.io/api/v1/memories', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.ULPI_API_KEY}` // assumed auth scheme
    },
    body: JSON.stringify({
      content: `PR #${pr.number}: ${pr.title}\n${pr.body}`,
      sector: 'episodic',
      tags: ['github', 'pr', ...pr.labels]
    })
  });
}

// Scheduled reinforcement: keep critical docs from decaying
async function reinforceCriticalDocs() {
  const critical = await searchMemories({ tags: ['critical', 'documentation'] });
  for (const memory of critical) {
    await reinforceMemory(memory.id, 'maintenance');
  }
}
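
In practice, storePRContext could be triggered from a GitHub webhook (for example, when a pull request merges) and reinforceCriticalDocs run on a weekly schedule; both triggers are illustrative choices, not requirements.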

Pro Tips

Don’t migrate all your existing notes at once. Start by storing new information naturally. The system builds value incrementally.
Resist the urge to manually organize everything. Let unimportant information fade. Important information naturally reinforces through access.
Take two seconds to choose the right sector:
  • Strategic decisions → Reflective (long-lasting)
  • How-to guides → Procedural (medium)
  • Facts/reference → Semantic (medium-long)
  • Events/meetings → Episodic (short-medium)
  • Sentiment/feedback → Emotional (short)
3-7 tags are ideal (see the example after this list):
  • Project/product names
  • Key entities (people, companies)
  • Topic areas
Don’t over-tag—semantic search handles the rest!
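
For example, using the storeMemory sketch from earlier, three tags covering entity, topic, and context are usually enough:

await storeMemory(
  'Acme Corp hit 429 rate limits on Starter; upgraded to Pro.',
  'episodic',
  ['acme-corp', 'support', 'rate-limiting']
);
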
Don’t wait for memories to fade. Reinforce critical info immediately:
  • Strategic decisions → Emergency profile
  • Core docs → Deep Learning profile weekly
  • Reference materials → Maintenance profile bi-weekly
Monitor memory health:
  • Check sector distribution (balanced across types?)
  • Identify hot memories (frequently accessed?)
  • Review decay trends (anything critical fading?)
  • Prune regularly (keep knowledge base focused)

Next Steps


Start small, trust the system, and let connections emerge naturally.