Getting Started with LM-Kit.NET
LM-Kit.NET helps you run modern AI capabilities directly in .NET applications with a developer experience focused on speed, control, and production readiness.
This page is your complete onboarding guide. You will go from installation to a working local AI app, then to the best next steps for your product.
Tip
If you are evaluating LM-Kit.NET for the first time, follow this page from top to bottom once. After that, use it as a checklist whenever you start a new project.
What you can build
With LM-Kit.NET, teams commonly build:
- AI agents with tool calling and memory
- Document Q&A over PDFs and mixed file formats
- Structured extraction pipelines for invoices, contracts, and forms
- Text intelligence workflows such as classification, sentiment, and summarization
- Speech workflows for transcription, meeting notes, and action item extraction
If you want inspiration before coding, visit LM-Kit solutions.
5-minute quick start
1) Create a new .NET app
```bash
dotnet new console -n LmKitQuickStart
cd LmKitQuickStart
```
2) Install LM-Kit.NET
```bash
dotnet add package LM-Kit.NET
```
Optional GPU backends:
```bash
dotnet add package LM-Kit.NET.Backend.Cuda12.Windows
dotnet add package LM-Kit.NET.Backend.Cuda12.Linux
```
3) Add your first LM-Kit.NET program
Replace Program.cs with:
```csharp
using LMKit.Global;
using LMKit.Model;
using LMKit.TextGeneration;

Runtime.LogLevel = Runtime.LMKitLogLevel.Info;
Runtime.Initialize();

using LM model = LM.LoadFromModelID("gemma3:4b");

var chat = new MultiTurnConversation(model);
var answer = chat.Submit("Give me 3 practical tips for writing safer C# code.");
Console.WriteLine(answer.Completion);
```
4) Run
```bash
dotnet run
```
On first run, the selected model is downloaded automatically. Subsequent runs load it from the local cache.
Prerequisites
Before integrating LM-Kit.NET in production, confirm these basics:
- Target frameworks: .NET Standard 2.0 through .NET 10.0
- Operating systems: Windows, Linux, macOS
- Recommended IDEs: Visual Studio 2019+, Rider, Visual Studio Code
- Hardware: CPU works out of the box; GPU acceleration is recommended for larger models and higher throughput
For best compatibility and performance, use current .NET SDK and updated GPU drivers.
Installation options
Option A: dotnet CLI (recommended)
```bash
dotnet add package LM-Kit.NET
```
Option B: NuGet Package Manager Console
```powershell
Install-Package LM-Kit.NET
```
Option C: NuGet UI
- Right-click your project.
- Select Manage NuGet Packages.
- Search for LM-Kit.NET.
- Install the latest stable version.
Optional acceleration packages
Install only what matches your environment:
- NVIDIA on Windows: LM-Kit.NET.Backend.Cuda12.Windows
- NVIDIA on Linux: LM-Kit.NET.Backend.Cuda12.Linux
For full tuning and backend guidance, see Configure GPU Backends.
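If the same project is built for several machines, the backend packages can be referenced conditionally instead of installed by hand. A sketch of a .csproj fragment, assuming MSBuild's `[MSBuild]::IsOSPlatform` property function (available in current SDKs); note that the condition is evaluated on the build machine at restore time, and pin real version numbers in practice:

```xml
<ItemGroup>
  <!-- Core package on every platform; replace * with a pinned version. -->
  <PackageReference Include="LM-Kit.NET" Version="*" />
  <!-- GPU backends: pulled in only on the matching OS. -->
  <PackageReference Include="LM-Kit.NET.Backend.Cuda12.Windows" Version="*"
                    Condition="$([MSBuild]::IsOSPlatform('Windows'))" />
  <PackageReference Include="LM-Kit.NET.Backend.Cuda12.Linux" Version="*"
                    Condition="$([MSBuild]::IsOSPlatform('Linux'))" />
</ItemGroup>
```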
Runtime initialization checklist
Use this pattern in app startup:
```csharp
using LMKit.Global;
using LMKit.Licensing;

// Optional if your plan requires it.
LicenseManager.SetLicenseKey("YOUR_LICENSE_KEY");

Runtime.LogLevel = Runtime.LMKitLogLevel.Info;
Runtime.EnableCuda = true; // Set true only when CUDA backend is installed and available.
Runtime.Initialize();
```
Checklist:
- Initialize runtime exactly once during startup
- Set log level before initialization
- Enable CUDA only on compatible machines
- Keep initialization close to DI and app bootstrapping code
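The initialize-exactly-once rule can be enforced with a small guard, so that repeated calls from different code paths only run the runtime setup a single time. A minimal sketch using only the BCL; the `Action` you pass in stands in for the `Runtime` configuration shown above:

```csharp
using System;
using System.Threading;

// Guard that runs a given initializer exactly once, even under concurrent callers.
public static class RuntimeBootstrap
{
    private static int _initialized; // 0 = not yet, 1 = done

    // Returns true if this call performed the initialization, false if it already happened.
    public static bool EnsureInitialized(Action initializer)
    {
        // Interlocked.CompareExchange makes the check-and-set atomic.
        if (Interlocked.CompareExchange(ref _initialized, 1, 0) == 0)
        {
            initializer(); // e.g. set Runtime.LogLevel and Runtime.EnableCuda, then Runtime.Initialize()
            return true;
        }
        return false;
    }
}
```

Call `RuntimeBootstrap.EnsureInitialized(...)` from Program.cs or your DI bootstrap; later calls become no-ops.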
Choose the right first model
Your first model choice affects latency, quality, and memory usage.
| If your priority is | Recommended starting point |
|---|---|
| Fast local chat on modest hardware | Smaller 3B to 8B instruct models |
| Better reasoning and richer responses | Mid-size instruct models with GPU support |
| Semantic search and retrieval | Dedicated embedding models |
| OCR and image understanding | Vision language models |
| Speech transcription | Whisper family models |
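The table above can be encoded as a small helper so the model choice lives in one place. A sketch: only `gemma3:4b` is taken from the quick start; the other IDs are illustrative placeholders you would replace with real entries from the LM-Kit model catalog:

```csharp
using System;

public static class ModelCatalog
{
    // Map a deployment priority to a default model ID.
    // Only "gemma3:4b" comes from the quick start; the rest are placeholders.
    public static string PickModelId(string priority) => priority switch
    {
        "fast-chat"     => "gemma3:4b",            // small instruct model for modest hardware
        "reasoning"     => "your-midsize-model",   // placeholder: mid-size instruct model, GPU
        "retrieval"     => "your-embedding-model", // placeholder: dedicated embedding model
        "vision"        => "your-vlm-model",       // placeholder: vision language model
        "transcription" => "your-whisper-model",   // placeholder: Whisper-family model
        _ => throw new ArgumentException($"Unknown priority: {priority}")
    };
}
```

Then the load call from the quick start becomes `LM.LoadFromModelID(ModelCatalog.PickModelId("fast-chat"))`.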
Use the deeper model selection guides when you are ready.
First production-grade features to add
Once your first prompt works, add these in order:
- Memory estimation: validate that your model fits before loading with Estimating Memory and Context Size
- Observability: add metrics and tracing with Add Telemetry and Observability
- Prompt and schema versioning: use Prompt Templates to separate prompt structure from data, and store templates and extraction schemas in source control
- Failure handling: retries, fallback models, and timeout policies
- Evaluation set: keep a small fixed dataset for regression checks
- Security controls: validate tool access and sensitive output paths
This sequence gives immediate reliability gains with minimal complexity.
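The failure-handling item can start as a small wrapper that retries the primary model a few times before switching to a fallback. A minimal sketch, independent of any LM-Kit API; plug your `chat.Submit(...)` call in as the primary delegate and a smaller model (or canned response) as the fallback:

```csharp
using System;
using System.Threading;

public static class Resilience
{
    // Try `primary` up to maxAttempts times; if every attempt throws, run `fallback`.
    public static T RunWithFallback<T>(Func<T> primary, Func<T> fallback,
                                       int maxAttempts = 3, int delayMs = 0)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try { return primary(); }
            catch (Exception) when (attempt < maxAttempts)
            {
                if (delayMs > 0) Thread.Sleep(delayMs); // simple backoff hook
            }
            catch (Exception)
            {
                break; // final attempt failed; fall through to the fallback
            }
        }
        return fallback();
    }
}
```

For true timeout policies you would wrap the delegate in a `CancellationTokenSource`; this sketch covers retries and fallback only.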
Common onboarding paths
I want an AI agent in production
I want document intelligence and RAG
- Build a Private Document Q&A System
- Build a RAG Pipeline for Retrieval-Augmented Generation
- Improve RAG Results with Reranking
I want speech and meeting workflows
- Transcribe Audio with Local Speech-to-Text
- Generate Structured Meeting Notes from Audio Recordings
- Extract Action Items and Tasks from Meeting Recordings
I want all practical guides
Microsoft AI ecosystem integration
LM-Kit.NET includes bridges for Microsoft AI abstractions.
These packages let you plug LM-Kit.NET into existing Semantic Kernel and Microsoft.Extensions.AI pipelines with familiar interfaces.
Working demos:
Troubleshooting quick checks
If something does not work on first run:
- Confirm package installation in the correct project
- Confirm runtime initialization is executed
- Confirm model ID spelling
- Confirm GPU backend package matches OS and hardware
- Temporarily set the log level to Debug for diagnostics
For model loading behavior and caching details, see Understanding Model Loading and Caching.
Support and community
You now have a complete baseline for onboarding and first delivery. Next, pick one path above and ship your first end-to-end AI feature.