How Does the Model Decide Which Skill to Activate?
TL;DR
LM-Kit.NET supports two activation modes. Manual activation lets your application control which skill is active, using slash commands (/email-writer) or programmatic selection via SkillActivator. Model-driven activation registers a SkillTool that lets the model autonomously choose the right skill from the registry based on the user's request. You can also use embedding-based discovery to find the best-matching skill using semantic similarity.
Mode 1: Manual Activation (Application-Controlled)
Your application decides which skill to activate. The user types a slash command or the app selects a skill based on its own logic:
using LMKit.Agents.Skills;
using LMKit.TextGeneration;
var registry = new SkillRegistry();
registry.LoadFromDirectory("./skills");
var activator = new SkillActivator(registry);
var chat = new MultiTurnConversation(model);
string userInput = "/email-writer Apologize for the delayed shipment";
// Parse the slash command
if (registry.TryParseSlashCommand(userInput, out var skill, out var arguments))
{
    // Format skill instructions for injection into the conversation
    string enrichedPrompt = activator.FormatForInjection(skill, SkillInjectionMode.UserMessage);
    string fullPrompt = enrichedPrompt + "\n\n---\n\n" + arguments;
    string response = chat.Submit(fullPrompt);
}
Injection Modes
Skills can be injected at different points in the conversation:
| Mode | Where Instructions Go | Best For |
|---|---|---|
| SystemPrompt | Prepended to the system prompt | Persistent behavioral guidelines active throughout conversation |
| UserMessage | Inserted as a user message | Task-specific instructions for one interaction |
| ToolResult | Returned as a tool result | Function-calling workflows (default for SkillTool) |
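For example, a skill that should act as persistent behavioral guidelines can be injected with SystemPrompt mode before the conversation begins. The sketch below reuses the registry and activator pattern shown above; the `"support tone"` skill query and the settable `chat.SystemPrompt` property are assumptions for illustration, not confirmed API details:

```csharp
using LMKit.Agents.Skills;
using LMKit.TextGeneration;

var registry = new SkillRegistry();
registry.LoadFromDirectory("./skills");
var activator = new SkillActivator(registry);

var chat = new MultiTurnConversation(model);

// Find a hypothetical customer-support skill and inject its instructions
// as persistent guidelines that stay active for the whole conversation.
var matches = registry.FindMatches("support tone", maxResults: 1);
foreach (var match in matches)
{
    // Assumes MultiTurnConversation exposes a settable SystemPrompt.
    chat.SystemPrompt = activator.FormatForInjection(match.Skill, SkillInjectionMode.SystemPrompt);
}
```

Because the instructions live in the system prompt, every subsequent `chat.Submit(...)` call is shaped by the skill without re-injecting it per turn.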
Best for: menu-driven applications, predictable skill selection, and cases where the application (not the model) should control activation.
Mode 2: Model-Driven Activation (AI-Controlled)
Register a SkillTool and the model decides when to activate a skill based on the user's request:
using LMKit.Agents.Skills;
using LMKit.TextGeneration;
var registry = new SkillRegistry();
registry.LoadFromDirectory("./skills");
var chat = new MultiTurnConversation(model);
chat.Tools.Register(new SkillTool(registry));
// The model sees available skills and activates the right one
string response = chat.Submit("Write an apology email about the delayed shipment");
// Model internally calls: activate_skill("email-writer")
// Receives full instructions, then follows them
How it works:
- The SkillTool's description dynamically lists all available skills and their descriptions.
- The model reads this list and decides whether a skill is relevant.
- If so, the model calls activate_skill("skill-name").
- The tool returns the skill's full instructions as a tool result.
- The model follows the instructions to produce its response.
You can monitor skill activation with an event:
var skillTool = new SkillTool(registry);
skillTool.SkillActivated += (sender, args) =>
{
    Console.WriteLine($"Model activated skill: {args.SkillName}");
};
chat.Tools.Register(skillTool);
Best for: Autonomous agents, conversational UIs, natural language task routing where the model should pick the right skill.
Embedding-Based Skill Discovery
For large skill registries, use semantic similarity to find the best-matching skill:
// Keyword-based matching (fast, no model needed)
var matches = registry.FindMatches("help me review my pull request", maxResults: 3);
// Embedding-based matching (more accurate, requires an embedding model);
// reassigns the variable declared above to avoid a duplicate declaration
matches = registry.FindMatchesWithEmbeddings(
    "help me review my pull request",
    embeddingProvider: text => embeddingModel.Embed(text),
    maxResults: 3,
    minScore: 0.5f
);
foreach (var match in matches)
    Console.WriteLine($"{match.Skill.Name}: {match.Score:P0}");
// Example output: code-review: 92%
// (a weak match such as explain at 34% falls below minScore and is excluded)
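Discovery composes naturally with manual activation: take the top match and inject it for the current turn. The sketch below builds only on calls shown above (`FindMatches`, `FormatForInjection`, `Submit`); the 0.5 confidence threshold and the assumption that keyword match scores share the 0–1 scale are illustrative, not confirmed behavior:

```csharp
// Route a free-form request to the best-matching skill, if any.
var activator = new SkillActivator(registry);
string request = "help me review my pull request";

foreach (var match in registry.FindMatches(request, maxResults: 1))
{
    if (match.Score < 0.5f)
        break; // No confident match; fall back to plain chat.

    // Inject the matched skill's instructions for this turn only.
    string prompt = activator.FormatForInjection(match.Skill, SkillInjectionMode.UserMessage)
                    + "\n\n---\n\n" + request;
    string response = chat.Submit(prompt);
    Console.WriteLine(response);
    break;
}
```

This gives you model-quality routing for large registries while keeping activation under application control.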
Comparison
| Aspect | Manual (SkillActivator) | Model-Driven (SkillTool) |
|---|---|---|
| Who activates? | Application code | Model via function calling |
| Requires tool support? | No | Yes |
| Slash commands? | Yes (/skill-name args) | No |
| Model picks the skill? | No. App picks. | Yes. Model decides autonomously. |
| Control level | Full application control | Model decides based on context |
| Use case | Menu-driven apps, explicit commands | Autonomous agents, natural language |
Progressive Loading
Skills use lazy loading for efficiency. During discovery, only the name and description are loaded (fast registry population). Full instructions are loaded only when a skill is activated. Resource file contents are loaded only when explicitly accessed. This means a registry with hundreds of skills has minimal memory overhead.
📚 Related Content
- What are Agent Skills and how do they differ from tools?: Skills vs tools conceptual overview.
- Can I share and distribute Agent Skills across projects?: Skill packaging and remote loading.
- What is function calling and tool use in LM-Kit.NET?: How the model calls tools (including SkillTool).
- Add Skills to Your AI Assistant: Step-by-step implementation guide.