Glossary of Key Concepts
LM-Kit.NET covers a broad range of Generative AI capabilities. This glossary gives you short, practical definitions for every major concept you will encounter while building with the SDK.
Tip
If you are new to LM-Kit.NET, start with Getting Started and come back here whenever you hit an unfamiliar term. Each entry links to deeper guides and API references so you can go as far as you need.
How to use this glossary
The sidebar lists every term alphabetically. You can also browse by theme below to find the group of concepts most relevant to what you are building.
Each glossary entry follows the same structure:
- TL;DR for a quick answer
- Definition with context and LM-Kit.NET specifics
- Code example you can copy into a project
- Related topics to keep exploring
Browse by theme
AI Agents and Orchestration
Start here if you are building autonomous workflows, tool-calling assistants, or multi-agent systems.
Key concepts: AI Agents, Orchestration, Planning, Chain-of-Thought, Tools, Tool Permission Policies, Filters and Middleware, Delegation, Memory, Skills, Guardrails, Function Calling, MCP
Recommended reading order for newcomers:
- AI Agents to understand what an agent is
- AI Agent Tools and Function Calling to see how agents interact with the outside world
- AI Agent Planning and AI Agent Reasoning for decision-making strategies
- AI Agent Orchestration when you need multiple agents working together
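The core loop behind tool-calling agents (generate, detect a tool request, execute it, feed the result back) can be sketched in a few lines. The example below is a generic Python illustration with a stubbed model and hypothetical tool names; it is not LM-Kit.NET API code:

```python
# Minimal agent loop: the model either answers directly or requests a tool call.
# 'fake_model' stands in for a real LLM; the tool name is illustrative only.

def get_time(_args):
    return "12:00"

TOOLS = {"get_time": get_time}

def fake_model(messages):
    # A real model decides this; here we hard-code one tool call, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"answer": f"The time is {messages[-1]['content']}."}

def run_agent(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if "tool" in reply:                       # model asked for a tool
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                                     # model produced a final answer
            return reply["answer"]

print(run_agent("What time is it?"))  # -> The time is 12:00.
```

Everything else in the agent stack (planning, delegation, permission policies, guardrails) layers on top of this loop.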
Model Architecture and Training
Start here if you want to understand how models work internally, or if you plan to fine-tune or quantize a model.
Key concepts: LLM, SLM, Transformer Architecture, Attention Mechanism, Context Windows, Weights, KV-Cache, Mixture of Experts, Distributed Inference, Quantization, Fine-Tuning, LoRA Adapters
Recommended reading order for newcomers:
- Large Language Model (LLM) for the big picture
- Token and Tokenization to understand what models actually process
- Context Windows and KV-Cache for memory and performance awareness
- Quantization when you need to run models on constrained hardware
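To see why quantization matters on constrained hardware, consider the back-of-the-envelope arithmetic for weight memory. This is a rough sketch only; real model files add format overhead and often mix precisions across layers:

```python
def weight_memory_gb(n_params_billion, bits_per_weight):
    """Approximate memory for the weights alone, ignoring KV-cache and runtime overhead."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 7B-parameter model:
print(weight_memory_gb(7, 16))  # FP16 -> 14.0 GB
print(weight_memory_gb(7, 4))   # Q4   ->  3.5 GB
```

Dropping from 16-bit to 4-bit weights cuts the footprint roughly fourfold, which is what makes on-device inference practical for many SLMs.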
Inference and Generation
Start here to master how text is actually produced, including sampling strategies and output constraints.
Key concepts: Inference, Temperature, Sampling, Dynamic Sampling, Grammar Sampling, Hallucination, Logits, Perplexity, Chat Completion, Text Completion, Speculative Decoding, Symbolic AI
Recommended reading order for newcomers:
- Inference for the core concept
- Chat Completion and Text Completion for the two main generation modes
- Sampling and Dynamic Sampling to control output quality
- Grammar Sampling when you need structured, schema-compliant output
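The effect of temperature on sampling comes down to rescaling logits before the softmax. The generic Python sketch below (not LM-Kit.NET code) shows how lower values sharpen the distribution toward the top token and higher values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))   # moderate spread
print(softmax_with_temperature(logits, 0.2))   # near-greedy: top logit dominates
print(softmax_with_temperature(logits, 2.0))   # flatter: more diverse sampling
```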
Retrieval and Knowledge
Start here if you are building search, Q&A, or knowledge-grounded applications.
Key concepts: Embeddings, RAG, Chunking, Reranking, Semantic Similarity, Vector Database
Recommended reading order for newcomers:
- Embeddings to understand vector representations
- RAG (Retrieval-Augmented Generation) for the core pattern
- Vector Database and Reranking to refine retrieval quality
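At its core, the retrieval step in RAG is a nearest-neighbor search over embedding vectors using a similarity measure such as cosine similarity. The toy Python sketch below uses hand-made three-dimensional vectors; real embeddings come from an embedding model and have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice these come from an embedding model.
documents = {
    "cats":    [0.9, 0.1, 0.0],
    "dogs":    [0.6, 0.3, 0.3],
    "finance": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query, highest first.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]), reverse=True)
print(ranked[0])  # -> cats; the best match is passed to the LLM as grounding context
```

A vector database performs this same ranking at scale with approximate indexes, and a reranker then reorders the top candidates with a more precise model.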
Document Processing and Extraction
Start here if you work with PDFs, scanned images, invoices, or structured data pipelines.
Key concepts: Intelligent Document Processing (IDP), Optical Character Recognition (OCR), Structured Data Extraction
Text Processing and Analysis
Start here for NLP fundamentals like entity extraction, prompt design, and tokenization.
Key concepts: Named Entity Recognition (NER), Prompt Engineering, Few-Shot Learning, Token, Tokenization
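Tokenization converts text into the integer IDs a model actually processes. The toy whitespace tokenizer below illustrates the mapping in Python; production tokenizers use subword schemes such as BPE, so token counts differ from word counts:

```python
# Toy tokenizer: real LLM tokenizers split into subwords, not whitespace words.
def build_vocab(corpus):
    vocab = {}
    for word in corpus.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def tokenize(text, vocab):
    return [vocab[w] for w in text.split()]

vocab = build_vocab("the cat sat on the mat")
print(vocab)                           # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(tokenize("the mat sat", vocab))  # -> [0, 4, 2]
```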
Speech and Audio
Start here if your application processes audio input, for example to detect when someone is speaking.
Key concepts: Voice Activity Detection (VAD)
Vision and Multimodal
Start here if you are combining text with images or other visual input.
Key concepts: Vision Language Models (VLM)
Where to go next
| Goal | Resource |
|---|---|
| Install LM-Kit.NET and run your first prompt | Getting Started |
| Build an AI agent with tools | Your First AI Agent |
| Explore practical how-to guides | How-To Guides |
| Browse working sample code | Samples Overview |
| Dive into the API | API Reference |