How Do I Measure and Evaluate the Quality of AI-Generated Output?


TL;DR

There is no single metric that captures AI output quality. The most effective approach combines automated checks (format validation, keyword matching, grammar constraints) with comparison testing (run the same prompts across models or versions and compare results). LM-Kit.NET provides tools that support both: grammar-constrained decoding for format guarantees, the performance score API for hardware evaluation, and the MemoryEstimation API for capacity planning. For content quality, build a test suite of representative prompts and evaluate outputs against expected answers.


Strategy 1: Build a Prompt Test Suite

Create a set of representative prompts that cover your application's key use cases, along with expected outputs or evaluation criteria:

using System;
using System.Linq;
using LMKit.Model;
using LMKit.TextGeneration;

// All entries share one shape so the implicitly typed array compiles.
var testCases = new[]
{
    new { Prompt = "What is the capital of France?", Expected = "Paris", ShouldContain = (string[])null },
    new { Prompt = "Summarize: LM-Kit.NET is a .NET SDK for local AI.", Expected = (string)null, ShouldContain = new[] { "SDK", "local", "AI" } },
    new { Prompt = "Classify as positive or negative: Great product!", Expected = "positive", ShouldContain = (string[])null },
};

using LM model = LM.LoadFromModelID("qwen3.5:9b");
var chat = new MultiTurnConversation(model);

foreach (var test in testCases)
{
    string result = chat.Submit(test.Prompt);

    bool pass = test.Expected != null
        ? result.Contains(test.Expected, StringComparison.OrdinalIgnoreCase)
        : test.ShouldContain.All(kw => result.Contains(kw, StringComparison.OrdinalIgnoreCase));

    Console.WriteLine($"{(pass ? "PASS" : "FAIL")}: {test.Prompt}");
    chat.ClearHistory();
}

Run this test suite after changing models, updating prompts, or modifying system instructions to catch regressions.
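To catch regressions automatically, the per-prompt results can be rolled into a single pass rate per run. A minimal sketch, assuming the loop above collects its pass flags into a list (the `results` variable here is illustrative, not part of the suite above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical: collect one flag per test case inside the loop above.
var results = new List<bool> { true, true, false };

int passed = results.Count(r => r);
double passRate = (double)passed / results.Count;

Console.WriteLine($"Passed {passed}/{results.Count} ({passRate:P0})");
```

Logging the pass rate per run (for example, in CI output) makes a drop after a model or prompt change immediately visible.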


Strategy 2: Compare Models Side by Side

When evaluating a new model, run the same prompts on both the current and candidate model and compare outputs:

using LMKit.Model;
using LMKit.TextGeneration;

using LM currentModel = LM.LoadFromModelID("qwen3.5:4b");
using LM candidateModel = LM.LoadFromModelID("qwen3.5:9b");

string prompt = "Explain the observer pattern in two sentences.";

var chatA = new MultiTurnConversation(currentModel);
var chatB = new MultiTurnConversation(candidateModel);

string resultA = chatA.Submit(prompt);
string resultB = chatB.Submit(prompt);

Console.WriteLine($"Current (4B): {resultA}\n");
Console.WriteLine($"Candidate (9B): {resultB}\n");

Compare on dimensions that matter for your use case: factual accuracy, completeness, formatting, tone, and response length.
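Side-by-side reading is the primary signal, but rough quantitative comparisons can flag large divergences quickly. A sketch, assuming the `resultA` and `resultB` strings from above; the word-overlap (Jaccard) score is a crude heuristic, not a quality measure:

```csharp
using System;
using System.Linq;

static double WordOverlap(string a, string b)
{
    // Jaccard similarity over lowercase word sets.
    var wordsA = a.ToLowerInvariant().Split(' ', StringSplitOptions.RemoveEmptyEntries).ToHashSet();
    var wordsB = b.ToLowerInvariant().Split(' ', StringSplitOptions.RemoveEmptyEntries).ToHashSet();
    if (wordsA.Count == 0 && wordsB.Count == 0) return 1.0;
    return (double)wordsA.Intersect(wordsB).Count() / wordsA.Union(wordsB).Count();
}

Console.WriteLine($"Length: {resultA.Length} vs {resultB.Length} chars");
Console.WriteLine($"Word overlap: {WordOverlap(resultA, resultB):F2}");
```

A very low overlap between two models answering the same prompt is a cue to read both outputs closely before switching.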


Strategy 3: Use Grammar Constraints for Verifiable Output

Grammar-constrained decoding guarantees that output matches a specific format. This turns quality evaluation into a structural validation problem:

using LMKit.Model;
using LMKit.TextGeneration;
using LMKit.TextGeneration.Sampling;

using LM model = LM.LoadFromModelID("qwen3.5:9b");
var chat = new MultiTurnConversation(model);

// Force output into a verifiable JSON schema
chat.Grammar = Grammar.FromJsonSchema(@"{
    ""type"": ""object"",
    ""properties"": {
        ""sentiment"": { ""enum"": [""positive"", ""negative"", ""neutral""] },
        ""confidence"": { ""type"": ""number"", ""minimum"": 0, ""maximum"": 1 }
    },
    ""required"": [""sentiment"", ""confidence""]
}");

string result = chat.Submit("Analyze sentiment: The product exceeded my expectations.");
// Result is guaranteed to be valid JSON with the specified structure

With grammar constraints, you can automatically validate 100% of outputs for format correctness.
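Because the grammar guarantees syntactically valid JSON, checking an output reduces to parsing it and verifying field values. A minimal sketch using `System.Text.Json` against the `result` string above:

```csharp
using System;
using System.Text.Json;

using JsonDocument doc = JsonDocument.Parse(result);
string sentiment = doc.RootElement.GetProperty("sentiment").GetString();
double confidence = doc.RootElement.GetProperty("confidence").GetDouble();

bool valid = (sentiment is "positive" or "negative" or "neutral")
    && confidence >= 0 && confidence <= 1;

Console.WriteLine(valid ? "Structurally valid" : "Unexpected field values");
```

This check can run on every production response at negligible cost, turning format compliance into a metric you can trend over time.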


Strategy 4: Measure Response Characteristics

Track quantitative properties of model responses to detect drift or degradation over time:

using System.Diagnostics;
using LMKit.Model;
using LMKit.TextGeneration;

// Any loaded model and conversation will do; reusing the IDs from above.
using LM model = LM.LoadFromModelID("qwen3.5:9b");
var chat = new MultiTurnConversation(model);
string prompt = "Explain the observer pattern in two sentences.";

var sw = Stopwatch.StartNew();
string result = chat.Submit(prompt);
sw.Stop();

// Track these metrics across runs
Console.WriteLine($"Response length: {result.Length} chars");
Console.WriteLine($"Generation time: {sw.ElapsedMilliseconds} ms");
Console.WriteLine($"Contains expected format: {result.StartsWith("{")}");

Metrics to track:

  • Response length: Sudden changes may indicate quality issues.
  • Generation time: Slower responses may indicate context overflow or hardware issues.
  • Format compliance: Percentage of responses matching expected structure.
  • Keyword presence: Percentage of responses containing expected domain terms.
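These metrics become useful once logged per response and compared across runs. A sketch of a simple in-memory tracker, assuming the `result` and `sw` variables from the snippet above (the tuple fields are illustrative, not LM-Kit APIs):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var history = new List<(DateTime Timestamp, int Length, long ElapsedMs, bool FormatOk)>();

// After each chat.Submit(...) call, record one entry:
history.Add((DateTime.UtcNow, result.Length, sw.ElapsedMilliseconds, result.StartsWith("{")));

// Summarize periodically to spot drift.
Console.WriteLine($"Avg length: {history.Average(m => m.Length):F0} chars");
Console.WriteLine($"Avg latency: {history.Average(m => m.ElapsedMs):F0} ms");
Console.WriteLine($"Format compliance: {history.Count(m => m.FormatOk) / (double)history.Count:P0}");
```

In production you would persist these entries (database, log file) rather than keep them in memory, so trends survive restarts.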


Strategy 5: LLM-as-Judge

Use a larger or different model to evaluate the output of your production model. This scales better than manual review:

using LMKit.Model;
using LMKit.TextGeneration;

// Production model generates the answer
using LM productionModel = LM.LoadFromModelID("qwen3.5:4b");
var productionChat = new MultiTurnConversation(productionModel);
string answer = productionChat.Submit("What causes ocean tides?");

// Judge model evaluates the answer
using LM judgeModel = LM.LoadFromModelID("qwen3.5:9b");
var judgeChat = new MultiTurnConversation(judgeModel);
judgeChat.SystemPrompt = @"You are an answer quality evaluator.
Rate the following answer on accuracy (1-5) and completeness (1-5).
Respond as JSON: {""accuracy"": N, ""completeness"": N, ""issues"": ""...""}";

judgeChat.Grammar = new Grammar(Grammar.PredefinedGrammar.Json);
string evaluation = judgeChat.Submit($"Question: What causes ocean tides?\nAnswer: {answer}");
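The judge's verdict can then be parsed and gated automatically. A sketch assuming the judge followed the requested schema; the 4-out-of-5 threshold is an arbitrary example:

```csharp
using System;
using System.Text.Json;

using JsonDocument verdict = JsonDocument.Parse(evaluation);
int accuracy = verdict.RootElement.GetProperty("accuracy").GetInt32();
int completeness = verdict.RootElement.GetProperty("completeness").GetInt32();

bool acceptable = accuracy >= 4 && completeness >= 4;
Console.WriteLine(acceptable
    ? "PASS"
    : $"FLAG for review: accuracy={accuracy}, completeness={completeness}");
```

Note that the predefined JSON grammar only guarantees valid JSON, not the exact schema, so wrapping the property lookups in a try/catch is prudent in production.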


When to Re-Evaluate

Run your evaluation suite when:

  • Changing models (even within the same family, e.g., 4B to 8B)
  • Updating system prompts or instructions
  • Modifying RAG pipelines (chunking strategy, embedding model, retrieval parameters)
  • Updating the LM-Kit.NET SDK (model catalog may point to newer model files)
  • Changing sampling parameters (temperature, top-p, grammar constraints)
