Glossary of Key Concepts

LM-Kit.NET covers a broad range of Generative AI capabilities. This glossary gives you short, practical definitions for every major concept you will encounter while building with the SDK.

Tip

If you are new to LM-Kit.NET, start with Getting Started and come back here whenever you hit an unfamiliar term. Each entry links to deeper guides and API references so you can go as far as you need.


How to use this glossary

The sidebar lists every term alphabetically. You can also browse by theme below to find the group of concepts most relevant to what you are building.

Each glossary entry follows the same structure:

  1. TL;DR for a quick answer
  2. Definition with context and LM-Kit.NET specifics
  3. Code example you can copy into a project
  4. Related topics to keep exploring

Browse by theme

AI Agents and Orchestration

Start here if you are building autonomous workflows, tool-calling assistants, or multi-agent systems.

Key concepts: AI Agents, Orchestration, Planning, Chain-of-Thought, Tools, Tool Permission Policies, Filters and Middleware, Delegation, Memory, Skills, Guardrails, Function Calling, MCP

Recommended reading order for newcomers:

  1. AI Agents to understand what an agent is
  2. AI Agent Tools and Function Calling to see how agents interact with the outside world
  3. AI Agent Planning and AI Agent Reasoning for decision-making strategies
  4. AI Agent Orchestration when you need multiple agents working together
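The tool-calling idea behind step 2 is language-agnostic: the model emits a structured request naming a tool and its arguments, and the host application looks the tool up in a registry and executes it. The toy sketch below illustrates only that pattern; every name in it (`TOOLS`, `dispatch`, the JSON shape) is hypothetical and is not part of the LM-Kit.NET API, which you should consult for the real function-calling surface.

```python
import json

# Hypothetical tool registry: the host exposes plain functions to the model by name.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json):
    # A tool call as a model might emit it: {"name": ..., "arguments": {...}}.
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
```

In a real agent loop, the tool result is fed back to the model so it can continue reasoning with the new information.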

Model Architecture and Training

Start here if you want to understand how models work internally, or if you plan to fine-tune or quantize a model.

Key concepts: LLM, SLM, Transformer Architecture, Attention Mechanism, Context Windows, Weights, KV-Cache, Mixture of Experts, Distributed Inference, Quantization, Fine-Tuning, LoRA Adapters

Recommended reading order for newcomers:

  1. Large Language Model (LLM) for the big picture
  2. Token and Tokenization to understand what models actually process
  3. Context Windows and KV-Cache for memory and performance awareness
  4. Quantization when you need to run models on constrained hardware
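To make step 4 concrete: quantization trades a little precision for a much smaller memory footprint by storing weights as small integers plus a scale factor. The sketch below shows symmetric int8 quantization of a weight vector and the round-trip error it introduces; it is a language-agnostic illustration of the idea, not how LM-Kit.NET performs quantization internally.

```python
# Symmetric int8 quantization: map floats in [-max, max] onto integers in [-127, 127].
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [v * scale for v in quantized]

weights = [0.12, -0.5, 0.33, 1.0, -0.99]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# The worst-case round-trip error is bounded by half the scale step.
error = max(abs(a - b) for a, b in zip(weights, restored))
```

Each stored value shrinks from 32 bits to 8, which is why quantized models fit on constrained hardware.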

Inference and Generation

Start here to master how text is actually produced, including sampling strategies and output constraints.

Key concepts: Inference, Temperature, Sampling, Dynamic Sampling, Grammar Sampling, Hallucination, Logits, Perplexity, Chat Completion, Text Completion, Speculative Decoding, Symbolic AI

Recommended reading order for newcomers:

  1. Inference for the core concept
  2. Chat Completion and Text Completion for the two main generation modes
  3. Sampling and Dynamic Sampling to control output quality
  4. Grammar Sampling when you need structured, schema-compliant output
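The role of temperature in step 3 can be seen in a few lines: logits are divided by the temperature before the softmax, so values below 1 sharpen the distribution (more deterministic output) and values above 1 flatten it (more diverse output). This is a generic illustration of the math, not LM-Kit.NET's sampler implementation.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before softmax: T < 1 sharpens, T > 1 flattens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.5)
flat = softmax_with_temperature(logits, 2.0)
```

At low temperature the top token dominates; at high temperature probability mass spreads across alternatives.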

Retrieval and Knowledge

Start here if you are building search, Q&A, or knowledge-grounded applications.

Key concepts: Embeddings, RAG, Chunking, Reranking, Semantic Similarity, Vector Database

Recommended reading order for newcomers:

  1. Embeddings to understand vector representations
  2. RAG (Retrieval-Augmented Generation) for the core pattern
  3. Vector Database and Reranking to refine retrieval quality
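The core operation behind all three steps is comparing embedding vectors, most commonly with cosine similarity. The toy sketch below uses made-up 3-dimensional "embeddings" to show why a query vector ranks one document above another; real embeddings from the SDK have hundreds or thousands of dimensions, and the vectors here are illustrative only.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (real ones are much higher-dimensional).
query = [0.9, 0.1, 0.0]
doc_related = [1.0, 0.2, 0.1]
doc_unrelated = [0.0, 0.1, 1.0]

related_score = cosine_similarity(query, doc_related)
unrelated_score = cosine_similarity(query, doc_unrelated)
```

A vector database performs this comparison at scale; reranking then reorders the top candidates with a more expensive model.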

Document Processing and Extraction

Start here if you work with PDFs, scanned images, invoices, or structured data pipelines.

Key concepts: Intelligent Document Processing (IDP), Optical Character Recognition (OCR), Structured Data Extraction

Text Processing and Analysis

Start here for NLP fundamentals like entity extraction, prompt design, and tokenization.

Key concepts: Named Entity Recognition (NER), Prompt Engineering, Few-Shot Learning, Token, Tokenization

Speech and Audio

Key concepts: Voice Activity Detection (VAD)

Vision and Multimodal

Key concepts: Vision Language Models (VLM)


Where to go next

| Goal | Resource |
| --- | --- |
| Install LM-Kit.NET and run your first prompt | Getting Started |
| Build an AI agent with tools | Your First AI Agent |
| Explore practical how-to guides | How-To Guides |
| Browse working sample code | Samples Overview |
| Dive into the API | API Reference |