Class LMKitPromptExecutionSettings

Namespace: LMKit.Integrations.SemanticKernel
Assembly: LM-Kit.NET.Integrations.SemanticKernel.dll

Provides a bridge between LM-Kit.NET text generation settings and Microsoft Semantic Kernel prompt execution settings. This class extends Microsoft.SemanticKernel.PromptExecutionSettings and implements ITextGenerationSettings, allowing text generation parameters such as sampling, repetition penalties, stop sequences, grammar, logit bias, and maximum token counts to be configured.

public class LMKitPromptExecutionSettings : PromptExecutionSettings, ITextGenerationSettings
Inheritance
PromptExecutionSettings
LMKitPromptExecutionSettings
Implements
ITextGenerationSettings
Inherited Members
PromptExecutionSettings.Freeze()
PromptExecutionSettings.Clone()
PromptExecutionSettings.ThrowIfFrozen()
PromptExecutionSettings.DefaultServiceId
PromptExecutionSettings.ServiceId
PromptExecutionSettings.ModelId
PromptExecutionSettings.FunctionChoiceBehavior
PromptExecutionSettings.ExtensionData
PromptExecutionSettings.IsFrozen

Constructors

LMKitPromptExecutionSettings(LM)

Initializes a new instance of the LMKitPromptExecutionSettings class using the specified LM model.
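
A minimal construction sketch is shown below. The model file path is a placeholder, and both the single-path LM constructor overload and the LMKit.Model namespace are assumptions; load the model however your application normally does.

using LMKit.Model;
using LMKit.Integrations.SemanticKernel;

// Load an LM-Kit model (placeholder path; single-path LM overload assumed).
var model = new LM("path/to/model.gguf");

// Create Semantic Kernel prompt execution settings bound to that model.
var settings = new LMKitPromptExecutionSettings(model);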

Properties

Grammar

Gets or sets the grammar rules applied during text generation.

LogitBias

Gets the logit bias settings that influence token selection probabilities during generation.

MaximumCompletionTokens

Gets or sets the maximum number of tokens to be generated in a single completion.

RepetitionPenalty

Gets the repetition penalty settings applied to reduce repeated token outputs during generation.

ResultsPerPrompt

Gets or sets the number of results to generate per prompt.

SamplingMode

Gets or sets the sampling mode used for token selection during text generation.

StopSequences

Gets the list of stop sequences that cause text generation to halt when encountered.

SystemPrompt

Gets or sets the system prompt that is applied to the model before the user's requests are forwarded.

The default value is "You are a chatbot that always responds promptly and helpfully to user requests."
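
Examples

The following is a rough usage sketch showing how the properties above can be set before the settings object is handed to Semantic Kernel through a KernelArguments instance. The property values are illustrative, the StopSequences collection is assumed to be mutable, and the Kernel passed in is assumed to already be wired to LM-Kit's text generation service (registration is not shown here).

using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using LMKit.Integrations.SemanticKernel;

static async Task AskAsync(Kernel kernel, LMKitPromptExecutionSettings settings)
{
    // Configure the generation parameters documented above (illustrative values).
    settings.MaximumCompletionTokens = 256;                            // cap the completion length
    settings.SystemPrompt = "You are a concise technical assistant.";  // replaces the default system prompt
    settings.StopSequences.Add("</answer>");                           // assumes the stop-sequence list is mutable
    settings.ResultsPerPrompt = 1;

    // PromptExecutionSettings flow to the underlying text generation
    // service through KernelArguments.
    var arguments = new KernelArguments(settings);
    var result = await kernel.InvokePromptAsync("Summarize the latest release notes.", arguments);
    Console.WriteLine(result);
}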