👉 Try the demo: https://github.com/LM-Kit/lm-kit-net-samples/tree/main/console_net/agents/mcp_integration
MCP Integration (Full Protocol) for C# .NET Applications
🎯 Purpose of the Demo
MCP Integration demonstrates how to use LM-Kit.NET to connect AI assistants with external tools via the Model Context Protocol (MCP), covering the complete protocol surface: tools, resources, sampling, roots, elicitation, progress tracking, logging, resource subscriptions, and cancellation.
The sample shows how to:
- Connect to MCP servers using `McpClient`.
- Discover available MCP tools dynamically.
- Integrate MCP tools with `MultiTurnConversation`.
- Handle sampling requests where the server asks the client for LLM completions.
- Handle elicitation requests where the server asks the user for structured input.
- Manage filesystem roots that the client exposes to the server.
- Track progress on long-running server operations.
- Receive structured log messages from servers with configurable log levels.
- Browse resources and resource templates, and subscribe to update notifications.
- Handle server-initiated cancellation of in-progress requests.
Why MCP with LM-Kit.NET?
- Complete protocol: supports all MCP capabilities, not just tools.
- Ecosystem access: connect to hundreds of existing MCP servers.
- Standardization: uses the open MCP specification (2025-06-18).
- Discovery: dynamically discover tools, resources, and prompts.
- Local-first: all AI processing runs on your hardware.
👥 Industry Target Audience
- Enterprise Developers: integrate AI assistants with corporate tools and data sources via MCP.
- Platform Engineers: build MCP-aware infrastructure that connects local LLMs to external services.
- AI/ML Engineers: explore the full MCP protocol surface with local inference.
- Tool Builders: test MCP server implementations against a complete client.
🚀 Problem Solved
- Tool connectivity: AI assistants can invoke tools from any MCP server.
- Bi-directional communication: servers can request LLM completions (sampling) and user input (elicitation).
- Resource awareness: servers can expose data through resources and templates.
- Observable: progress tracking, logging, and catalog change notifications provide visibility.
- Filesystem boundaries: roots let you control which paths the server can access.
💻 Sample Application Description
Console app that:
- Connects to a specified MCP server (HTTP/SSE transport).
- Discovers available tools from the server and registers them with `MultiTurnConversation`.
- Handles all server-initiated requests: sampling, elicitation, roots, progress, and log messages.
- Allows natural language queries that invoke MCP tools.
- Provides interactive commands for resource browsing, root management, and log level control.
✨ Key Features
- Full MCP protocol: tools, resources, sampling, elicitation, roots, progress, logging, cancellation.
- Sampling handler: fulfills server-to-client LLM completion requests using the loaded model.
- Elicitation handler: prompts the user when the server needs structured input.
- Resource browser: lists resources, templates, and supports subscription for real-time updates.
- Root management: add/remove filesystem roots that the client exposes to servers.
- Log level control: configure the minimum severity level for server log messages.
- Capability detection: displays which protocol features the server supports.
🏗️ Architecture
```
┌───────────────────────┐
│    User / Console     │
└───────────┬───────────┘
            │
┌───────────▼───────────┐
│ MultiTurnConversation │
│ (Local LLM Inference) │
└───────────┬───────────┘
            │
┌───────────▼───────────┐
│       McpClient       │ ◄─── Sampling (server asks for LLM completion)
│    (Full Protocol)    │ ◄─── Elicitation (server asks for user input)
│                       │ ◄─── Progress / Logging / Cancellation
│                       │ ──── Roots (filesystem boundaries)
│                       │ ──── Resource subscriptions
└───────────┬───────────┘
            │ HTTP + SSE
┌───────────▼───────────┐
│      MCP Server       │
│  (Tools/Resources)    │
└───────────────────────┘
```
Built-In Models (menu)
On startup, the sample shows a model selection menu:
| Option | Model | Approx. VRAM Needed |
|---|---|---|
| 0 | Mistral Ministral 3 8B | ~6 GB VRAM |
| 1 | Meta Llama 3.1 8B | ~6 GB VRAM |
| 2 | Google Gemma 3 12B | ~9 GB VRAM |
| 3 | Microsoft Phi-4 Mini 3.8B | ~3.3 GB VRAM |
| 4 | Alibaba Qwen-3 8B | ~5.6 GB VRAM |
| 5 | Microsoft Phi-4 14.7B | ~11 GB VRAM |
| 6 | IBM Granite 4 7B | ~6 GB VRAM |
| 7 | OpenAI GPT OSS 20B | ~16 GB VRAM |
| 8 | Z.ai GLM 4.7 Flash 30B | ~18 GB VRAM |
| other | Custom model URI | depends on model |
MCP Protocol Features
Sampling (Server-to-Client LLM Requests)
```csharp
mcpClient.SetSamplingHandler((request, cancellationToken) =>
{
    // Server sends messages + system prompt + max tokens.
    // Client runs inference and returns the completion.
    // ('prompt' is assembled from the request messages in the full sample.)
    var result = samplingChat.Submit(prompt, cancellationToken);
    return Task.FromResult(McpSamplingResponse.FromText(result.Completion, "local-model"));
});
```
Elicitation (Server-Requested User Input)
```csharp
mcpClient.SetElicitationHandler((request, cancellationToken) =>
{
    // Server sends a message and optional JSON schema.
    // Client prompts the user and returns structured data.
    Console.Write(request.Message);
    var input = Console.ReadLine();
    return Task.FromResult(McpElicitationResponse.Accept(
        new Dictionary<string, object> { ["response"] = input }));
});
```
Roots (Filesystem Boundaries)
```csharp
// Add a root so the server knows which paths are available.
mcpClient.AddRoot(Environment.CurrentDirectory, "working-directory");

// React when the server queries available roots.
mcpClient.RootsRequested += (sender, e) => { /* log or update */ };
```
Progress, Logging, Cancellation
```csharp
mcpClient.ProgressReceived += (sender, e) =>
    Console.Write($"\r{e.Percentage:F0}% - {e.Message}");

mcpClient.LogMessageReceived += (sender, e) =>
    Console.WriteLine($"[{e.Level}] {e.Logger}: {e.Data}");

mcpClient.CancellationReceived += (sender, e) =>
    Console.WriteLine($"Cancelled request {e.RequestId}: {e.Reason}");

mcpClient.SetLogLevel(McpLogLevel.Info);
```
Resources & Subscriptions
```csharp
var resources = mcpClient.GetResources();
var templates = mcpClient.GetResourceTemplates();

mcpClient.SubscribeToResource("resource://some-uri");
mcpClient.ResourceUpdated += (sender, e) =>
    Console.WriteLine($"Resource changed: {e.Uri}");
```
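Combining the calls above, a minimal auto-subscribe sketch would iterate the resource catalog and subscribe to each entry. Note the assumption that each listed resource exposes a `Uri` property; this document does not show the resource item's shape, so verify against the sample source.

```csharp
// Sketch: subscribe to every resource the server currently lists.
// Assumes each catalog entry exposes a Uri property (an assumption,
// not confirmed by this document).
foreach (var resource in mcpClient.GetResources())
{
    mcpClient.SubscribeToResource(resource.Uri);
}

// Update notifications then arrive for all subscribed resources.
mcpClient.ResourceUpdated += (sender, e) =>
    Console.WriteLine($"Resource changed: {e.Uri}");
```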
⚙️ Getting Started
Prerequisites
- .NET 8.0 or later
- Enough VRAM for the selected model (roughly 3-18 GB, per the table above)
- Internet access for connecting to MCP servers
Download
```bash
git clone https://github.com/LM-Kit/lm-kit-net-samples
cd lm-kit-net-samples/console_net/agents/mcp_integration
```
Run
```bash
dotnet build
dotnet run
```
Then:
- Select a model by typing 0-8, or paste a custom model URI.
- Choose an MCP server (or enter a custom URI).
- Chat with the assistant using the connected tools.
- Use `/resources`, `/roots`, `/loglevel`, and `/capabilities` to explore protocol features.
🔧 Troubleshooting
- Server doesn't support sampling/elicitation: these features depend on the server. Try the "Everything" reference server.
- Authentication failures: check your bearer token when using authenticated servers.
- Timeout on tool calls: some MCP servers have rate limits or slow responses.
- No resources available: many MCP servers only expose tools, not resources.
🚀 Extend the Demo
- Add stdio transport: use `McpClient.ForStdio()` for local MCP servers (see the `mcp_stdio_integration` demo).
- Custom sampling logic: route sampling requests to different models based on `ModelPreferences`.
- Auto-subscribe: automatically subscribe to all resources on connection.
- Progress UI: build a progress bar for long-running operations.
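The stdio extension could start from something like the sketch below. `McpClient.ForStdio()` is named by this demo, but its exact signature is not shown here, so the command/argument parameters are an assumption; check the `mcp_stdio_integration` demo for the real call.

```csharp
// Sketch: launch a local MCP server over stdio instead of HTTP/SSE.
// The parameter shape (command + arguments) is an assumption, not the
// documented signature -- see the mcp_stdio_integration demo.
var stdioClient = McpClient.ForStdio(
    "npx", "-y", "@modelcontextprotocol/server-everything");
```

Handlers and events (sampling, elicitation, roots, logging) would then be wired on `stdioClient` exactly as shown for the HTTP/SSE client above.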