Enum ModelCapabilities

Namespace: LMKit.Model
Assembly: LM-Kit.NET.dll

Flags enum describing the capabilities a model supports in LM-Kit. Multiple values can be combined with bitwise operations to represent models that support several capabilities.

[Flags]
public enum ModelCapabilities
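
Because the enum carries the [Flags] attribute, values can be combined with bitwise OR. A minimal sketch (the variable name and the particular combination are illustrative, not prescribed by the API):

```csharp
using LMKit.Model;

// Describe a hypothetical multimodal assistant model:
// it chats, understands images, and can produce text embeddings.
ModelCapabilities capabilities =
    ModelCapabilities.Chat
    | ModelCapabilities.Vision
    | ModelCapabilities.TextEmbeddings;
```

Each field below is a distinct power-of-two bit, so any combination of capabilities maps to a unique value.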

Fields

None = 0

Indicates no specific capability is assigned. Useful as an initial or default value before a model's capabilities are determined.

TextEmbeddings = 1

Indicates the model supports text embedding generation. Used for semantic similarity, clustering, and retrieval.

TextGeneration = 2

Indicates the model supports general text generation, such as content creation, summarization, and free-form generation.

Chat = 4

Indicates the model supports chat-oriented conversation, including dialogue, chatbots, and question answering.

CodeCompletion = 8

Indicates the model supports code generation and completion, assisting developers with suggestions and snippet synthesis.

SentimentAnalysis = 16

Indicates the model supports sentiment and emotion analysis, including positive, negative, neutral, and specific emotions.

Math = 32

Indicates the model supports mathematical reasoning and problem solving, including equation solving, symbolic manipulation, and numerical calculations.

Vision = 64

Indicates the model supports vision-language (VLM) tasks, combining image and text understanding for multimodal reasoning, recognition, and description generation.

ImageEmbeddings = 128

Indicates the model supports image embedding generation, used for image similarity, search, and clustering.

TextReranking = 256

Indicates the model supports text reranking, reordering candidate results by relevance or quality (e.g., in RAG/search).

SpeechToText = 512

Indicates the model supports speech-to-text transcription.

VoiceActivityDetection = 1024

Indicates the model supports voice-activity detection (VAD), distinguishing speech from silence or background noise.

ImageSegmentation = 2048

Indicates the model supports image segmentation, partitioning images into regions or objects for analysis and understanding.
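
To test whether a combined value includes a given capability, mask it with bitwise AND or use Enum.HasFlag. A minimal sketch (the combination chosen is illustrative):

```csharp
using LMKit.Model;

ModelCapabilities capabilities =
    ModelCapabilities.TextGeneration | ModelCapabilities.CodeCompletion;

// Bitwise test: true when the CodeCompletion bit is set.
bool canCompleteCode =
    (capabilities & ModelCapabilities.CodeCompletion) != ModelCapabilities.None;

// Equivalent, more readable check via Enum.HasFlag.
bool canChat = capabilities.HasFlag(ModelCapabilities.Chat); // false for this combination
```

The bitwise form avoids the boxing overhead of Enum.HasFlag in hot paths; for ordinary capability checks, either is fine.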