What are User Profiles?

User profiles are automatically maintained collections of facts about your users that Supermemory builds from all their interactions and content. Think of it as a persistent “about me” document that’s always up-to-date and instantly accessible. Instead of searching through memories every time you need context about a user, profiles give you:
  • Instant access to comprehensive user information
  • Automatic updates as users interact with your system
  • Two-tier structure separating permanent facts from temporary context
Profile data can be appended to the system prompt so that it’s always sent to your LLM and you don’t need to run multiple queries.

Static vs Dynamic Profiles

Profiles are intelligently divided into two categories:

Static Profile

Long-term, stable facts that define who the user is. These are facts that rarely change: the foundational information about a user that remains consistent over time. Examples:
  • “Sarah Chen is a senior software engineer at TechCorp”
  • “Sarah specializes in distributed systems and Kubernetes”
  • “Sarah has a PhD in Computer Science from MIT”
  • “Sarah prefers technical documentation over video tutorials”

Dynamic Profile

Recent context and temporary information. These are current activities, recent interests, and temporary states that provide immediate context. Examples:
  • “Sarah is currently migrating the payment service to microservices”
  • “Sarah recently started learning Rust for a side project”
  • “Sarah is preparing for a conference talk next month”
  • “Sarah is debugging a memory leak in the authentication service”

How are profiles different from search?

Why We Built Profiles

The Problem with Search-Only Approaches

Traditional memory systems rely entirely on search, which has fundamental limitations:
  1. Search is too narrow: When you search for “project updates”, you miss that the user prefers bullet points, works in PST timezone, and uses specific technical terminology.
  2. Search is repetitive: Every chat message triggers multiple searches for basic context that rarely changes.
  3. Search misses relationships: Individual memory chunks don’t capture the full picture of who someone is and how different facts relate.
Profiles solve these problems by maintaining a persistent, holistic view of each user. Profiles don’t replace search - they complement it:
  1. Profile provides the foundation: The user’s profile gives your LLM comprehensive background context about who they are, what they know, and what they’re working on.
  2. Search adds specificity: When you need specific information (like “error in deployment yesterday”), search finds those exact memories.
  3. Combined for complete context: Your LLM gets both the broad understanding from profiles AND the specific details from search.

Real-World Example

Imagine a user asks: “Can you help me debug this?”

Without profiles: The LLM has no context about the user’s expertise level, current projects, or debugging preferences.

With profiles: The LLM knows:
  • The user is a senior engineer (adjust technical level)
  • They’re working on a payment service migration (likely context)
  • They prefer command-line tools over GUIs (tool suggestions)
  • They recently had issues with memory leaks (possible connection)

Technical Implementation

Endpoint Details

The profile endpoint provides a simple interface:

Endpoint: POST /v4/profile

Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| containerTag | string | Yes | The container tag (usually a user ID) to get the profile for |
| q | string | No | Optional search query to include search results with the profile |
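Since `containerTag` is the only required field, the request body can be assembled with a small helper that validates it up front. This is a minimal sketch; `buildProfileRequest` is our own illustrative name, not part of any Supermemory SDK:

```typescript
// Hypothetical helper: builds the POST /v4/profile request body,
// enforcing that containerTag is present.
interface ProfileRequest {
  containerTag: string;
  q?: string;
}

function buildProfileRequest(containerTag: string, q?: string): ProfileRequest {
  if (!containerTag) {
    throw new Error("containerTag is required");
  }
  // Only include `q` when a search query was actually given, so the
  // API returns profile-only data otherwise.
  return q ? { containerTag, q } : { containerTag };
}

console.log(JSON.stringify(buildProfileRequest("user_123")));
console.log(JSON.stringify(buildProfileRequest("user_123", "deployment errors")));
```

Omitting `q` entirely (rather than sending an empty string) keeps the call a pure profile lookup.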

Response Structure

The response includes both profile data and optional search results:
{
  "profile": {
    "static": [
      "User is a software engineer",
      "User specializes in Python and React"
    ],
    "dynamic": [
      "User is working on Project Alpha",
      "User recently started learning Rust"
    ]
  },
  "searchResults": {
    "results": [...],  // Only if 'q' parameter was provided
    "total": 15,
    "timing": 45.2
  }
}
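For type safety, the response shape above can be modeled with TypeScript interfaces. This sketch is derived from the example JSON; the exact shape of individual search results is an assumption (only a `content` field is shown here):

```typescript
// Types modeling the /v4/profile response shown above.
interface ProfileResponse {
  profile: {
    static: string[];
    dynamic: string[];
  };
  searchResults?: {
    results: Array<{ content: string }>; // result item shape is assumed
    total: number;
    timing: number;
  };
}

// Flatten a profile into a prompt-ready block of text.
function profileToPromptSection(res: ProfileResponse): string {
  const lines = [
    "ABOUT THE USER:",
    ...res.profile.static,
    "",
    "CURRENT CONTEXT:",
    ...res.profile.dynamic,
  ];
  return lines.join("\n");
}

const sample: ProfileResponse = {
  profile: {
    static: ["User is a software engineer"],
    dynamic: ["User is working on Project Alpha"],
  },
};
console.log(profileToPromptSection(sample));
```

Marking `searchResults` optional mirrors the API behavior: it is only present when the `q` parameter was provided.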

Code Examples

Basic Profile Retrieval

// Direct API call using fetch
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123'
  })
});

const data = await response.json();

console.log("Static facts:", data.profile.static);
console.log("Dynamic context:", data.profile.dynamic);

// Use in your LLM prompt
const systemPrompt = `
User Context:
${data.profile.static?.join('\n') || ''}

Current Activity:
${data.profile.dynamic?.join('\n') || ''}

Please provide personalized assistance based on this context.
`;
Sometimes you want both the user’s profile AND specific search results:
// Get profile with search results
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123',
    q: 'deployment errors yesterday'  // Optional search query
  })
});

const data = await response.json();

// Now you have both profile and specific search results
const profile = data.profile;
const searchResults = data.searchResults?.results || [];

// Combine for comprehensive context
const context = {
  userBackground: profile.static,
  currentContext: profile.dynamic,
  specificInfo: searchResults.map(r => r.content)
};

Integration with Chat Applications

Here’s how to use profiles in a real chat application:
async function handleChatMessage(userId: string, message: string) {
  // Get user profile for personalization
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      containerTag: userId
    })
  });
  
  const profileData = await profileResponse.json();

  // Build personalized system prompt
  const systemPrompt = buildPersonalizedPrompt(profileData.profile);

  // Send to your LLM with context
  const response = await llm.chat({
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: message }
    ]
  });

  return response;
}

function buildPersonalizedPrompt(profile: any) {
  return `You are assisting a user with the following context:

ABOUT THE USER:
${profile.static?.join('\n') || 'No profile information yet.'}

CURRENT CONTEXT:
${profile.dynamic?.join('\n') || 'No recent activity.'}

Provide responses that are personalized to their expertise level, 
preferences, and current work context.`;
}

AI SDK Integration

The Supermemory AI SDK provides a more elegant way to use profiles through the withSupermemory middleware, which automatically handles profile retrieval and injection into your LLM prompts.

Automatic Profile Integration

The AI SDK’s withSupermemory middleware abstracts away all the profile endpoint complexity:
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Automatically injects user profile into every LLM call
const modelWithMemory = withSupermemory(openai("gpt-4"), "user_123")

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})

// The model automatically has access to the user's profile!

Memory Search Modes

The AI SDK supports three modes for memory retrieval:

Profile Mode (Default)

Retrieves user profile memories without query filtering:
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Uses profile mode by default - gets all user profile memories
const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")

// Explicitly specify profile mode
const modelWithProfile = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "profile" 
})

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})

Query Mode

Searches memories based on the user’s message:
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithQuery = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "query" 
})

const result = await generateText({
  model: modelWithQuery,
  messages: [{ role: "user", content: "What's my favorite programming language?" }],
})

Full Mode

Combines both profile and query results:
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithFull = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "full" 
})

const result = await generateText({
  model: modelWithFull,
  messages: [{ role: "user", content: "Tell me about my preferences" }],
})

Learn More About AI SDK

Explore the full capabilities of the Supermemory AI SDK, including tools for adding memories, searching, and automatic profile injection.

Understanding the Modes (Without AI SDK)

When using the API directly without the AI SDK:
  • Profile Only: Call /v4/profile and add the profile data to your system prompt. This gives persistent user context without query-specific search.
  • Query Only: Use the /v4/search endpoint with the user’s specific question to find relevant memories based on their current query. Read the search docs.
  • Full Mode: Combine both approaches - add profile data to the system prompt AND use the search endpoint for conversational context based on the user’s specific query. This provides the most comprehensive context.
// Full mode example without AI SDK
async function getFullContext(userId: string, userQuery: string) {
  // 1. Get user profile for system prompt
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({ containerTag: userId })
  });
  const profileData = await profileResponse.json();
  
  // 2. Search for query-specific memories
  const searchResponse = await fetch('https://api.supermemory.ai/v3/search', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({ 
      q: userQuery,
      containerTag: userId 
    })
  });
  const searchData = await searchResponse.json();
  
  // 3. Combine both in your prompt
  return {
    systemPrompt: `User Profile:\n${profileData.profile.static?.join('\n')}`,
    queryContext: searchData.results
  };
}
Alternatively, you can just pass the q parameter to the /v4/profile endpoint to get those search results in a single call. The snippet above makes separate calls only to demonstrate how search and profile can be used independently.

How Profiles are Built

Profiles are automatically constructed and maintained through Supermemory’s ingestion pipeline:
  1. Content Ingestion: When users add documents, chats, or any content to Supermemory, it goes through the standard ingestion workflow.
  2. Intelligence Extraction: AI analyzes the content to extract not just memories, but also facts about the user themselves.
  3. Profile Operations: The system generates profile operations (add, update, or remove facts) based on the new information.
  4. Automatic Updates: Profiles are updated in real time, ensuring they always reflect the latest information about the user.
You don’t need to manually manage profiles - they’re automatically maintained as users interact with your system. Just ingest content normally, and profiles build themselves.
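Ingesting content therefore doubles as profile maintenance. The sketch below builds an ingestion request; the endpoint path (/v3/documents) and body fields (`content`, `containerTag`) are our assumptions based on Supermemory's ingestion API, so verify them against the current API reference:

```typescript
// Hedged sketch: ingest content so the profile updates automatically.
// Endpoint and field names are assumptions - check the ingestion docs.
function buildIngestRequest(userId: string, content: string) {
  return {
    url: "https://api.supermemory.ai/v3/documents",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ content, containerTag: userId }),
    },
  };
}

// Ingest a chat message; profile facts are extracted server-side.
const { url, options } = buildIngestRequest(
  "user_123",
  "I'm migrating our payment service to microservices."
);
// await fetch(url, options);  // uncomment to actually send the request
```

Note there is no separate "update profile" call: the profile operations happen server-side as part of ingestion.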

Common Use Cases

Personalized AI Assistants

Profiles ensure your AI assistant remembers user preferences, expertise, and context across conversations.

Customer Support Systems

Support agents (or AI) instantly see customer history, preferences, and current issues without manual searches.

Educational Platforms

Adapt content difficulty and teaching style based on the learner’s profile and progress.

Development Tools

IDE assistants that understand your coding style, current projects, and technical preferences.

Performance Benefits

Profiles provide significant performance improvements:
| Metric | Without Profiles | With Profiles |
|---|---|---|
| Context retrieval | 3-5 search queries | 1 profile call |
| Response time | 200-500ms | 50-100ms |
| Token usage | High (multiple searches) | Low (single response) |
| Consistency | Varies by search quality | Always comprehensive |