Enum MemoryExtractionMode

Namespace
LMKit.Agents.Memory
Assembly
LM-Kit.NET.dll

Specifies the strategy used to automatically extract memorable facts from conversations.

public enum MemoryExtractionMode

Fields

None = 0

No automatic memory extraction. Memories must be stored manually via SaveInformationAsync(string, string, string, MetadataCollection, CancellationToken).

This is the default mode. Use it when you want full control over what gets stored in memory, or when the agent does not need to learn from conversations.
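A minimal sketch of manual storage in this default mode. The argument roles below are illustrative assumptions inferred from the SaveInformationAsync(string, string, string, MetadataCollection, CancellationToken) signature (a collection name, an entry key, and the text to store); consult the SaveInformationAsync reference for the exact contract.

```csharp
using System.Threading;
using LMKit.Agents.Memory;
using LMKit.Model;

using var embeddingModel = new LM("path/to/embedding-model.gguf");

// ExtractionMode defaults to None, so nothing is stored automatically.
var memory = new AgentMemory(embeddingModel);

// Persist a fact explicitly. The parameter roles shown here are assumptions
// based on the signature above, not confirmed names.
await memory.SaveInformationAsync(
    "user-profile",                        // assumed: target collection
    "preferred-language",                  // assumed: entry identifier
    "The user prefers answers in French.", // assumed: text to remember
    null,                                  // no additional metadata
    CancellationToken.None);
```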

LlmBased = 1

Uses the language model to analyze each conversation turn and extract facts worth remembering.

After each conversation turn, the extractor sends the user message and assistant response to the LLM with a structured extraction prompt. The LLM identifies facts, preferences, and contextual information, classifies them by MemoryType, and assigns an importance level.

Extracted memories are deduplicated against existing memory before storage to prevent redundant entries. By default, extraction runs asynchronously (fire-and-forget), so it does not block the conversation flow.

The extraction model defaults to the agent's chat model but can be overridden via ExtractionModel to use a lighter model for cost efficiency.
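As a sketch of that override, assuming ExtractionModel is a settable property on AgentMemory alongside ExtractionMode:

```csharp
using LMKit.Agents.Memory;
using LMKit.Model;

using var embeddingModel = new LM("path/to/embedding-model.gguf");
using var lightModel = new LM("path/to/small-extraction-model.gguf");

var memory = new AgentMemory(embeddingModel)
{
    ExtractionMode = MemoryExtractionMode.LlmBased,
    // Route extraction prompts to a lighter model instead of the agent's
    // chat model to reduce cost and latency. Property placement on
    // AgentMemory is an assumption based on the surrounding text.
    ExtractionModel = lightModel
};
```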

Examples

Example: Configuring automatic memory extraction

using LMKit.Agents;
using LMKit.Agents.Memory;
using LMKit.Model;

// Load the chat and embedding models.
using var chatModel = new LM("path/to/chat-model.gguf");
using var embeddingModel = new LM("path/to/embedding-model.gguf");

// Enable LLM-based extraction so the agent learns from each conversation turn.
var memory = new AgentMemory(embeddingModel)
{
    ExtractionMode = MemoryExtractionMode.LlmBased
};

var agent = Agent.CreateBuilder(chatModel)
    .WithMemory(memory)
    .Build();

Remarks

Memory extraction analyzes each conversation turn (user message and assistant response) to identify facts, preferences, and contextual information worth persisting in AgentMemory. The extracted memories enable the agent to recall relevant context in future sessions.

Choosing a Mode

  • None: Use when memory is populated manually or via external processes.
  • LlmBased: Use for conversational agents that should automatically learn from interactions.