Class AfterTokenSamplingEventArgs
- Namespace: LMKit.TextGeneration.Events
- Assembly: LM-Kit.NET.dll
Provides data for the event that is raised after a token has been selected (sampled) by the text generation process.
public sealed class AfterTokenSamplingEventArgs : EventArgs
- Inheritance
  - object → EventArgs → AfterTokenSamplingEventArgs
Examples
The following example shows how to subscribe to the AfterTokenSampling event, inspect the selected token and its probability, log the top alternative candidates, and stop generation when the context window is nearly full.
using LMKit.Model;
using LMKit.TextGeneration;
using LMKit.TextGeneration.Events;

LM model = LM.LoadFromModelID("gemma3:4b");
var chat = new MultiTurnConversation(model);

chat.AfterTokenSampling += (sender, e) =>
{
    // Log the selected token text and its probability.
    Console.Write(e.TextChunk);
    Console.WriteLine($" [prob={e.TokenProbability:P2}, perplexity={e.Perplexity:F2}]");

    // Inspect the top 3 candidates by rank.
    for (int i = 0; i < 3; i++)
    {
        string text = e.GetTokenCandidateTextChunkByRank(i);
        float prob = e.GetTokenCandidateProbabilityByRank(i);
        Console.WriteLine($"  Rank {i}: \"{text}\" ({prob:P2})");
    }

    // Stop generation when context space is running low,
    // keeping the last generated token in the response.
    if (e.ContextRemainingSpace < 100)
    {
        e.Stop = true;
        e.KeepLast = true;
    }
};

var response = chat.Submit("Write a short poem about the ocean.");
Remarks
This event argument class exposes the language model's state immediately after the sampling process has picked the next token, including:
- The language model instance.
- The selected token and its related text and probabilities.
- Context size and the remaining space within the model's context window.
- Perplexity measurements.
You can modify certain aspects of the generation process (e.g., the selected token, or whether to stop early) before generation continues.
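For instance, a handler can override the sampler's choice by assigning a different token ID to the Token property before generation continues. The sketch below assumes a chat instance wired up as in the Examples section; the 0.05 margin is an illustrative threshold, not an LM-Kit recommendation:

```csharp
// Sketch: promote the runner-up candidate whenever the top choice
// is only marginally more likely than the second-ranked one.
chat.AfterTokenSampling += (sender, e) =>
{
    float top = e.GetTokenCandidateProbabilityByRank(0);
    float runnerUp = e.GetTokenCandidateProbabilityByRank(1);

    if (top - runnerUp < 0.05f)
    {
        // Replace the sampled token before generation continues.
        e.Token = e.GetTokenCandidateByRank(1);
    }
};
```

Because Token is applied before the next decoding step, the replacement token becomes part of the model's context exactly as if the sampler had chosen it.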
Properties
- ContextRemainingSpace
Gets the remaining space in the context window.
- ContextSize
Gets the size of the underlying context window used by the language model.
- KeepLast
Gets or sets a value indicating whether to include the last generated token in the response when the generation is stopped prematurely.
- Perplexity
Gets the perplexity of the model's predictions up to this point.
- Stop
Gets or sets a value indicating whether the text completion process should be stopped prematurely.
- TextChunk
Gets the text representation of the currently selected token.
- Token
Gets or sets the identifier of the selected token for text completion.
- TokenProbability
Gets the probability of the currently selected token, ranging from 0 to 1.
Methods
- GetTokenCandidateByRank(int)
Retrieves the token ID of a candidate token based on its rank order in terms of likelihood.
- GetTokenCandidateProbability(int)
Retrieves the probability of a specified token by its token ID.
- GetTokenCandidateProbabilityByRank(int)
Retrieves the probability of a candidate token based on its rank order in terms of likelihood.
- GetTokenCandidateTextChunkByRank(int)
Retrieves the text representation of a candidate token based on its rank order in terms of likelihood.