Interface ITextGenerationSettings
- Namespace: LMKit.TextGeneration
- Assembly: LM-Kit.NET.dll
Represents the settings used to control text generation behavior. This includes specifying the sampling strategy, repetition penalties, stop sequences, and optional grammar enforcement for structured and controlled output.
public interface ITextGenerationSettings
Examples
Configure sampling, stop sequences, and grammar on a conversation:
using LMKit.Model;
using LMKit.TextGeneration;
using LMKit.TextGeneration.Sampling;
LM model = LM.LoadFromModelID("gemma3:4b");
var conversation = new SingleTurnConversation(model);
// Adjust sampling strategy
conversation.SamplingMode = new RandomSampling { Temperature = 0.7f };
// Add stop sequences
conversation.StopSequences.Add("###");
// Disable repetition penalty when using grammar
conversation.RepetitionPenalty.Disable();
conversation.Grammar = new Grammar(Grammar.PredefinedGrammar.Json);
// Limit output length
conversation.MaximumCompletionTokens = 512;
var result = conversation.Submit("List three planets as JSON.");
Console.WriteLine(result.Completion);
Properties
- Grammar
Gets or sets the Grammar object used to constrain generation to a defined formal grammar. This allows for controlled and structured output from the model, such as valid JSON.
- LogitBias
The LogitBias object used to adjust the likelihood of specific tokens during text generation, allowing particular tokens to be promoted or suppressed.
- MaximumCompletionTokens
Defines the maximum number of tokens (text chunks) permitted for text completion or generation.
- RepetitionPenalty
Gets the RepetitionPenalty object that specifies the rules for repetition penalties applied during text completion.
- SamplingMode
Gets or sets the TokenSampling object that specifies the sampling strategy used during text completion.
- StopSequences
Gets the list of sequences that, when encountered, cause the API to stop generating additional tokens (text chunks). The resulting completion excludes any occurrence of the specified stop sequences.